Test Report: Docker_Linux_crio 19689

af422e057ba227eec8656c67d09f56de251f325e:2024-09-23:36336

Test fail (3/327)

Order  Failed test                        Duration (s)
33     TestAddons/parallel/Registry       72.9
34     TestAddons/parallel/Ingress        156.95
36     TestAddons/parallel/MetricsServer  340.75
TestAddons/parallel/Registry (72.9s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:328: registry stabilized in 3.319894ms
addons_test.go:330: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-66c9cd494c-nrpsw" [40d0085a-ea70-4052-ad07-a26bb7092539] Running
addons_test.go:330: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.002688251s
addons_test.go:333: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-gnlc5" [d7382df4-3be8-48d0-9dcb-8cb5cc78647c] Running
addons_test.go:333: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003299323s
addons_test.go:338: (dbg) Run:  kubectl --context addons-445250 delete po -l run=registry-test --now
addons_test.go:343: (dbg) Run:  kubectl --context addons-445250 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:343: (dbg) Non-zero exit: kubectl --context addons-445250 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": exit status 1 (1m0.074686154s)

-- stdout --
	pod "registry-test" deleted

-- /stdout --
** stderr ** 
	error: timed out waiting for the condition

** /stderr **
addons_test.go:345: failed to hit registry.kube-system.svc.cluster.local. args "kubectl --context addons-445250 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c \"wget --spider -S http://registry.kube-system.svc.cluster.local\"" failed: exit status 1
addons_test.go:349: expected curl response be "HTTP/1.1 200", but got *pod "registry-test" deleted
*
addons_test.go:357: (dbg) Run:  out/minikube-linux-amd64 -p addons-445250 ip
2024/09/23 10:34:27 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:386: (dbg) Run:  out/minikube-linux-amd64 -p addons-445250 addons disable registry --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Registry]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-445250
helpers_test.go:235: (dbg) docker inspect addons-445250:

-- stdout --
	[
	    {
	        "Id": "13e368cd79e9d454e98de5a9cff5f0313d5870383f0fb5ba461690974f64d8de",
	        "Created": "2024-09-23T10:22:07.858444399Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 12702,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-23T10:22:07.992183864Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:d94335c0cd164ddebb3c5158e317bcf6d2e08dc08f448d25251f425acb842829",
	        "ResolvConfPath": "/var/lib/docker/containers/13e368cd79e9d454e98de5a9cff5f0313d5870383f0fb5ba461690974f64d8de/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/13e368cd79e9d454e98de5a9cff5f0313d5870383f0fb5ba461690974f64d8de/hostname",
	        "HostsPath": "/var/lib/docker/containers/13e368cd79e9d454e98de5a9cff5f0313d5870383f0fb5ba461690974f64d8de/hosts",
	        "LogPath": "/var/lib/docker/containers/13e368cd79e9d454e98de5a9cff5f0313d5870383f0fb5ba461690974f64d8de/13e368cd79e9d454e98de5a9cff5f0313d5870383f0fb5ba461690974f64d8de-json.log",
	        "Name": "/addons-445250",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-445250:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-445250",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1f8a71aa8244c18f7fa961d18bc719b2df405f1b269f136ff90fad7264f8c0b9-init/diff:/var/lib/docker/overlay2/7d643569ae4970466837c9a65113e736da4066b6ecef95c8dfd4e28343439fd4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f8a71aa8244c18f7fa961d18bc719b2df405f1b269f136ff90fad7264f8c0b9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f8a71aa8244c18f7fa961d18bc719b2df405f1b269f136ff90fad7264f8c0b9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f8a71aa8244c18f7fa961d18bc719b2df405f1b269f136ff90fad7264f8c0b9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-445250",
	                "Source": "/var/lib/docker/volumes/addons-445250/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-445250",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-445250",
	                "name.minikube.sigs.k8s.io": "addons-445250",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "11702f683be50ee88e7771ed6cf42c56a8b968ee9233079204792fc15e16ca3a",
	            "SandboxKey": "/var/run/docker/netns/11702f683be5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-445250": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "6e9e6c600c8a794f7091380417d6269c6bcfab6c9ff820d67e47faecc18d66e9",
	                    "EndpointID": "e2a135f221a1a3480c5eff902d6dc55c09d0804810f708c60a366ec74feb8c19",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-445250",
	                        "13e368cd79e9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
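The port bindings in the inspect payload above show where the container's services are published on the host. As a hedged illustration (not part of the test suite), here is a small Python sketch that extracts the host endpoint for a given container port from `docker inspect`-shaped JSON; the excerpt is trimmed from the output above:

```python
import json

# Trimmed excerpt of the `docker inspect addons-445250` payload above.
inspect_output = json.loads("""
[
  {
    "Name": "/addons-445250",
    "NetworkSettings": {
      "Ports": {
        "5000/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32770"}],
        "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32771"}]
      }
    }
  }
]
""")

def host_endpoint(inspect, container_port):
    """Return 'HostIp:HostPort' for a published container port, else None."""
    bindings = inspect[0]["NetworkSettings"]["Ports"].get(container_port)
    if not bindings:
        return None
    b = bindings[0]
    return f"{b['HostIp']}:{b['HostPort']}"

print(host_endpoint(inspect_output, "5000/tcp"))  # 127.0.0.1:32770
```

Note that the `DEBUG] GET http://192.168.49.2:5000` request earlier in the log goes to the container's network IP directly; these bindings expose the same registry port on the host loopback instead.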
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-445250 -n addons-445250
helpers_test.go:244: <<< TestAddons/parallel/Registry FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Registry]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-445250 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-445250 logs -n 25: (1.224576297s)
helpers_test.go:252: TestAddons/parallel/Registry logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only                                                                     | download-only-764506   | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | -p download-only-764506                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| delete  | -p download-only-764506                                                                     | download-only-764506   | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| start   | -o=json --download-only                                                                     | download-only-662224   | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | -p download-only-662224                                                                     |                        |         |         |                     |                     |
	|         | --force --alsologtostderr                                                                   |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1                                                                |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | --all                                                                                       | minikube               | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| delete  | -p download-only-662224                                                                     | download-only-662224   | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| delete  | -p download-only-764506                                                                     | download-only-764506   | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| delete  | -p download-only-662224                                                                     | download-only-662224   | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| start   | --download-only -p                                                                          | download-docker-581243 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | download-docker-581243                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-581243                                                                   | download-docker-581243 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-083835   | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | binary-mirror-083835                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:40991                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-083835                                                                     | binary-mirror-083835   | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| addons  | enable dashboard -p                                                                         | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | addons-445250                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | addons-445250                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-445250 --wait=true                                                                | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:25 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-445250 addons disable                                                                | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | addons-445250                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-445250 ssh curl -s                                                                   | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-445250 addons                                                                        | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-445250 addons                                                                        | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-445250 ssh cat                                                                       | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	|         | /opt/local-path-provisioner/pvc-f2f3f271-6db1-4176-931b-e93dd714c1c9_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-445250 addons disable                                                                | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC |                     |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-445250 ip                                                                            | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	| addons  | addons-445250 addons disable                                                                | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:21:46
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:21:46.722935   11967 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:21:46.723042   11967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:21:46.723048   11967 out.go:358] Setting ErrFile to fd 2...
	I0923 10:21:46.723052   11967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:21:46.723211   11967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3772/.minikube/bin
	I0923 10:21:46.723833   11967 out.go:352] Setting JSON to false
	I0923 10:21:46.724726   11967 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":251,"bootTime":1727086656,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:21:46.724818   11967 start.go:139] virtualization: kvm guest
	I0923 10:21:46.726917   11967 out.go:177] * [addons-445250] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 10:21:46.728496   11967 notify.go:220] Checking for updates...
	I0923 10:21:46.728529   11967 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:21:46.730127   11967 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:21:46.731529   11967 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3772/kubeconfig
	I0923 10:21:46.733032   11967 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3772/.minikube
	I0923 10:21:46.734520   11967 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 10:21:46.735940   11967 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:21:46.737437   11967 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:21:46.757864   11967 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 10:21:46.757943   11967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:21:46.804617   11967 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-23 10:21:46.795429084 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 10:21:46.804761   11967 docker.go:318] overlay module found
	I0923 10:21:46.807023   11967 out.go:177] * Using the docker driver based on user configuration
	I0923 10:21:46.808457   11967 start.go:297] selected driver: docker
	I0923 10:21:46.808470   11967 start.go:901] validating driver "docker" against <nil>
	I0923 10:21:46.808480   11967 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:21:46.809252   11967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:21:46.853138   11967 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-23 10:21:46.844831844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 10:21:46.853280   11967 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:21:46.853569   11967 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:21:46.855475   11967 out.go:177] * Using Docker driver with root privileges
	I0923 10:21:46.856837   11967 cni.go:84] Creating CNI manager for ""
	I0923 10:21:46.856896   11967 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0923 10:21:46.856908   11967 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 10:21:46.856965   11967 start.go:340] cluster config:
	{Name:addons-445250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-445250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:21:46.858565   11967 out.go:177] * Starting "addons-445250" primary control-plane node in "addons-445250" cluster
	I0923 10:21:46.859951   11967 cache.go:121] Beginning downloading kic base image for docker with crio
	I0923 10:21:46.861523   11967 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 10:21:46.862889   11967 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:21:46.862932   11967 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-3772/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 10:21:46.862943   11967 cache.go:56] Caching tarball of preloaded images
	I0923 10:21:46.862994   11967 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 10:21:46.863034   11967 preload.go:172] Found /home/jenkins/minikube-integration/19689-3772/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 10:21:46.863044   11967 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 10:21:46.863345   11967 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/config.json ...
	I0923 10:21:46.863370   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/config.json: {Name:mk54c5258400406bc02a0be01645830e04ed3533 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:46.878981   11967 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 10:21:46.879106   11967 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 10:21:46.879123   11967 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0923 10:21:46.879127   11967 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0923 10:21:46.879134   11967 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0923 10:21:46.879141   11967 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0923 10:21:59.079658   11967 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0923 10:21:59.079699   11967 cache.go:194] Successfully downloaded all kic artifacts
	I0923 10:21:59.079749   11967 start.go:360] acquireMachinesLock for addons-445250: {Name:mk58626d6fa4f17f6f629476491054fee819afac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 10:21:59.079854   11967 start.go:364] duration metric: took 81.967µs to acquireMachinesLock for "addons-445250"
	I0923 10:21:59.079884   11967 start.go:93] Provisioning new machine with config: &{Name:addons-445250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-445250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:21:59.079961   11967 start.go:125] createHost starting for "" (driver="docker")
	I0923 10:21:59.082680   11967 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0923 10:21:59.082908   11967 start.go:159] libmachine.API.Create for "addons-445250" (driver="docker")
	I0923 10:21:59.082939   11967 client.go:168] LocalClient.Create starting
	I0923 10:21:59.083053   11967 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19689-3772/.minikube/certs/ca.pem
	I0923 10:21:59.283728   11967 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19689-3772/.minikube/certs/cert.pem
	I0923 10:21:59.338041   11967 cli_runner.go:164] Run: docker network inspect addons-445250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0923 10:21:59.353789   11967 cli_runner.go:211] docker network inspect addons-445250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0923 10:21:59.353863   11967 network_create.go:284] running [docker network inspect addons-445250] to gather additional debugging logs...
	I0923 10:21:59.353885   11967 cli_runner.go:164] Run: docker network inspect addons-445250
	W0923 10:21:59.368954   11967 cli_runner.go:211] docker network inspect addons-445250 returned with exit code 1
	I0923 10:21:59.368983   11967 network_create.go:287] error running [docker network inspect addons-445250]: docker network inspect addons-445250: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-445250 not found
	I0923 10:21:59.368994   11967 network_create.go:289] output of [docker network inspect addons-445250]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-445250 not found
	
	** /stderr **
	I0923 10:21:59.369064   11967 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 10:21:59.384645   11967 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b467a0}
	I0923 10:21:59.384701   11967 network_create.go:124] attempt to create docker network addons-445250 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0923 10:21:59.384762   11967 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-445250 addons-445250
	I0923 10:21:59.445035   11967 network_create.go:108] docker network addons-445250 192.168.49.0/24 created
	I0923 10:21:59.445065   11967 kic.go:121] calculated static IP "192.168.49.2" for the "addons-445250" container
	I0923 10:21:59.445131   11967 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0923 10:21:59.460629   11967 cli_runner.go:164] Run: docker volume create addons-445250 --label name.minikube.sigs.k8s.io=addons-445250 --label created_by.minikube.sigs.k8s.io=true
	I0923 10:21:59.476907   11967 oci.go:103] Successfully created a docker volume addons-445250
	I0923 10:21:59.476979   11967 cli_runner.go:164] Run: docker run --rm --name addons-445250-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-445250 --entrypoint /usr/bin/test -v addons-445250:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0923 10:22:03.434642   11967 cli_runner.go:217] Completed: docker run --rm --name addons-445250-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-445250 --entrypoint /usr/bin/test -v addons-445250:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (3.957618145s)
	I0923 10:22:03.434674   11967 oci.go:107] Successfully prepared a docker volume addons-445250
	I0923 10:22:03.434699   11967 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:22:03.434718   11967 kic.go:194] Starting extracting preloaded images to volume ...
	I0923 10:22:03.434769   11967 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19689-3772/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-445250:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0923 10:22:07.800698   11967 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19689-3772/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-445250:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (4.365884505s)
	I0923 10:22:07.800727   11967 kic.go:203] duration metric: took 4.366005266s to extract preloaded images to volume ...
	W0923 10:22:07.800860   11967 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0923 10:22:07.800985   11967 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0923 10:22:07.843740   11967 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-445250 --name addons-445250 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-445250 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-445250 --network addons-445250 --ip 192.168.49.2 --volume addons-445250:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
	I0923 10:22:08.145428   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Running}}
	I0923 10:22:08.163069   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:08.180280   11967 cli_runner.go:164] Run: docker exec addons-445250 stat /var/lib/dpkg/alternatives/iptables
	I0923 10:22:08.223991   11967 oci.go:144] the created container "addons-445250" has a running status.
	I0923 10:22:08.224039   11967 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa...
	I0923 10:22:08.349744   11967 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0923 10:22:08.370308   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:08.394245   11967 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0923 10:22:08.394268   11967 kic_runner.go:114] Args: [docker exec --privileged addons-445250 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0923 10:22:08.436001   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:08.455362   11967 machine.go:93] provisionDockerMachine start ...
	I0923 10:22:08.455457   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:08.480578   11967 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:08.480844   11967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 10:22:08.480858   11967 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 10:22:08.481650   11967 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44746->127.0.0.1:32768: read: connection reset by peer
	I0923 10:22:11.613107   11967 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-445250
	
	I0923 10:22:11.613148   11967 ubuntu.go:169] provisioning hostname "addons-445250"
	I0923 10:22:11.613220   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:11.632203   11967 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:11.632375   11967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 10:22:11.632389   11967 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-445250 && echo "addons-445250" | sudo tee /etc/hostname
	I0923 10:22:11.772148   11967 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-445250
	
	I0923 10:22:11.772239   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:11.793347   11967 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:11.793545   11967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 10:22:11.793571   11967 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-445250' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-445250/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-445250' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 10:22:11.921432   11967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 10:22:11.921466   11967 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19689-3772/.minikube CaCertPath:/home/jenkins/minikube-integration/19689-3772/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19689-3772/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19689-3772/.minikube}
	I0923 10:22:11.921533   11967 ubuntu.go:177] setting up certificates
	I0923 10:22:11.921552   11967 provision.go:84] configureAuth start
	I0923 10:22:11.921640   11967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-445250
	I0923 10:22:11.937581   11967 provision.go:143] copyHostCerts
	I0923 10:22:11.937653   11967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3772/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19689-3772/.minikube/key.pem (1679 bytes)
	I0923 10:22:11.937757   11967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3772/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19689-3772/.minikube/ca.pem (1082 bytes)
	I0923 10:22:11.937816   11967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3772/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19689-3772/.minikube/cert.pem (1123 bytes)
	I0923 10:22:11.937865   11967 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19689-3772/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19689-3772/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19689-3772/.minikube/certs/ca-key.pem org=jenkins.addons-445250 san=[127.0.0.1 192.168.49.2 addons-445250 localhost minikube]
	I0923 10:22:12.190566   11967 provision.go:177] copyRemoteCerts
	I0923 10:22:12.190629   11967 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 10:22:12.190662   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:12.207913   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:12.301604   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 10:22:12.323506   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 10:22:12.345626   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 10:22:12.366986   11967 provision.go:87] duration metric: took 445.417004ms to configureAuth
	I0923 10:22:12.367016   11967 ubuntu.go:193] setting minikube options for container-runtime
	I0923 10:22:12.367177   11967 config.go:182] Loaded profile config "addons-445250": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:22:12.367273   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:12.384149   11967 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:12.384351   11967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 10:22:12.384365   11967 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 10:22:12.601161   11967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 10:22:12.601191   11967 machine.go:96] duration metric: took 4.145798692s to provisionDockerMachine
	I0923 10:22:12.601205   11967 client.go:171] duration metric: took 13.518254951s to LocalClient.Create
	I0923 10:22:12.601232   11967 start.go:167] duration metric: took 13.518321061s to libmachine.API.Create "addons-445250"
	I0923 10:22:12.601243   11967 start.go:293] postStartSetup for "addons-445250" (driver="docker")
	I0923 10:22:12.601256   11967 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 10:22:12.601330   11967 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 10:22:12.601386   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:12.617703   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:12.710189   11967 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 10:22:12.713341   11967 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 10:22:12.713372   11967 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 10:22:12.713380   11967 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 10:22:12.713387   11967 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 10:22:12.713396   11967 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3772/.minikube/addons for local assets ...
	I0923 10:22:12.713453   11967 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3772/.minikube/files for local assets ...
	I0923 10:22:12.713475   11967 start.go:296] duration metric: took 112.225945ms for postStartSetup
	I0923 10:22:12.713792   11967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-445250
	I0923 10:22:12.730492   11967 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/config.json ...
	I0923 10:22:12.730768   11967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:22:12.730831   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:12.747370   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:12.837980   11967 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 10:22:12.841809   11967 start.go:128] duration metric: took 13.761835585s to createHost
	I0923 10:22:12.841831   11967 start.go:83] releasing machines lock for "addons-445250", held for 13.76196327s
	I0923 10:22:12.841880   11967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-445250
	I0923 10:22:12.857765   11967 ssh_runner.go:195] Run: cat /version.json
	I0923 10:22:12.857812   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:12.857826   11967 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 10:22:12.857890   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:12.875001   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:12.875855   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:13.035237   11967 ssh_runner.go:195] Run: systemctl --version
	I0923 10:22:13.039237   11967 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 10:22:13.175392   11967 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 10:22:13.179320   11967 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 10:22:13.195856   11967 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0923 10:22:13.195931   11967 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 10:22:13.221316   11967 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0923 10:22:13.221364   11967 start.go:495] detecting cgroup driver to use...
	I0923 10:22:13.221399   11967 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 10:22:13.221447   11967 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 10:22:13.235209   11967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 10:22:13.245258   11967 docker.go:217] disabling cri-docker service (if available) ...
	I0923 10:22:13.245304   11967 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 10:22:13.257110   11967 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 10:22:13.270190   11967 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 10:22:13.345987   11967 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 10:22:13.431095   11967 docker.go:233] disabling docker service ...
	I0923 10:22:13.431158   11967 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 10:22:13.448504   11967 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 10:22:13.459326   11967 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 10:22:13.538609   11967 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 10:22:13.627128   11967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 10:22:13.637297   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:22:13.651328   11967 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 10:22:13.651409   11967 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:13.660149   11967 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 10:22:13.660207   11967 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:13.668833   11967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:13.677566   11967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:13.686751   11967 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 10:22:13.695283   11967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:13.704095   11967 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:13.718346   11967 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
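The run of `sed -i` commands above rewrites `/etc/crio/crio.conf.d/02-crio.conf` in place: swap the pause image, force the `cgroupfs` cgroup manager, re-create `conmon_cgroup = "pod"`, and inject `net.ipv4.ip_unprivileged_port_start=0` into `default_sysctls`. The same edits can be replayed against a throwaway copy without root; the sample starting contents below are assumed, only the edit commands mirror the log:

```shell
# Replay the log's CRI-O config edits on a sample 02-crio.conf.
conf="$(mktemp)"
cat > "$conf" <<'EOF'
[crio.runtime]
cgroup_manager = "systemd"
conmon_cgroup = "system.slice"
pause_image = "registry.k8s.io/pause:3.9"
EOF
# Same sed programs as in the log, minus sudo and the real path:
sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' "$conf"
sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' "$conf"
sed -i '/conmon_cgroup = .*/d' "$conf"
sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' "$conf"
grep -q '^ *default_sysctls' "$conf" || \
  sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' "$conf"
sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' "$conf"
cat "$conf"
```

Note the delete-then-append dance for `conmon_cgroup`: it makes the edit idempotent regardless of what the line said before (GNU sed's `a` text supports `\n` escapes, which is how the two-line `default_sysctls = [` / `]` block is appended in one command).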
	I0923 10:22:13.727226   11967 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 10:22:13.734826   11967 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 10:22:13.734883   11967 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 10:22:13.747287   11967 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 10:22:13.755093   11967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:22:13.829252   11967 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 10:22:14.158226   11967 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 10:22:14.158294   11967 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 10:22:14.161542   11967 start.go:563] Will wait 60s for crictl version
	I0923 10:22:14.161588   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:22:14.164545   11967 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 10:22:14.194967   11967 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0923 10:22:14.195073   11967 ssh_runner.go:195] Run: crio --version
	I0923 10:22:14.228259   11967 ssh_runner.go:195] Run: crio --version
	I0923 10:22:14.262832   11967 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0923 10:22:14.264297   11967 cli_runner.go:164] Run: docker network inspect addons-445250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 10:22:14.279971   11967 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0923 10:22:14.283271   11967 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:22:14.293146   11967 kubeadm.go:883] updating cluster {Name:addons-445250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-445250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 10:22:14.293287   11967 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:22:14.293343   11967 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 10:22:14.352262   11967 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 10:22:14.352284   11967 crio.go:433] Images already preloaded, skipping extraction
	I0923 10:22:14.352323   11967 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 10:22:14.382541   11967 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 10:22:14.382561   11967 cache_images.go:84] Images are preloaded, skipping loading
	I0923 10:22:14.382568   11967 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0923 10:22:14.382655   11967 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-445250 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-445250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 10:22:14.382713   11967 ssh_runner.go:195] Run: crio config
	I0923 10:22:14.424280   11967 cni.go:84] Creating CNI manager for ""
	I0923 10:22:14.424300   11967 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0923 10:22:14.424309   11967 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 10:22:14.424330   11967 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-445250 NodeName:addons-445250 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 10:22:14.424465   11967 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-445250"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 10:22:14.424518   11967 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 10:22:14.432810   11967 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 10:22:14.432882   11967 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 10:22:14.440979   11967 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0923 10:22:14.456846   11967 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 10:22:14.473092   11967 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0923 10:22:14.489063   11967 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0923 10:22:14.492280   11967 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
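The `grep -v … ; echo …` pipeline above is minikube's idempotent `/etc/hosts` update: strip any stale entry for the name, append the current one, then copy the result back. A sketch of the same pattern against a temp file (the real command finishes with `sudo cp` over `/etc/hosts`, skipped here):

```shell
# Idempotent hosts-entry update, as in the log's /etc/hosts command.
hosts="$(mktemp)"
printf '127.0.0.1\tlocalhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"
# Drop any old line ending in "<tab>control-plane.minikube.internal",
# then append the fresh mapping:
{ grep -v $'\tcontrol-plane.minikube.internal$' "$hosts"; \
  printf '192.168.49.2\tcontrol-plane.minikube.internal\n'; } > "$hosts.new"
mv "$hosts.new" "$hosts"
cat "$hosts"
```

Running it twice leaves exactly one `control-plane.minikube.internal` line, which is the point of the grep-then-echo shape.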
	I0923 10:22:14.502541   11967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:22:14.581826   11967 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:22:14.594096   11967 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250 for IP: 192.168.49.2
	I0923 10:22:14.594120   11967 certs.go:194] generating shared ca certs ...
	I0923 10:22:14.594140   11967 certs.go:226] acquiring lock for ca certs: {Name:mkbb719d992584afad4bc806b595dfbc8bf85283 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:14.594259   11967 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-3772/.minikube/ca.key
	I0923 10:22:14.681658   11967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3772/.minikube/ca.crt ...
	I0923 10:22:14.681683   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/ca.crt: {Name:mk1f9f53ba20e5a2662fcdac9037bc6a4a8fd1b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:14.681837   11967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3772/.minikube/ca.key ...
	I0923 10:22:14.681847   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/ca.key: {Name:mk52ffe2b2a53346768d26bc1f6d2740c4fc9ca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:14.681914   11967 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-3772/.minikube/proxy-client-ca.key
	I0923 10:22:14.764606   11967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3772/.minikube/proxy-client-ca.crt ...
	I0923 10:22:14.764633   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/proxy-client-ca.crt: {Name:mk8f4a9df3471bb1b7cc77d68850cb5575be1691 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:14.764782   11967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3772/.minikube/proxy-client-ca.key ...
	I0923 10:22:14.764793   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/proxy-client-ca.key: {Name:mk637c0032a7e0b43519628027243d2c0d2d6b64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:14.764855   11967 certs.go:256] generating profile certs ...
	I0923 10:22:14.764906   11967 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.key
	I0923 10:22:14.764920   11967 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt with IP's: []
	I0923 10:22:15.005422   11967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt ...
	I0923 10:22:15.005450   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: {Name:mk4bd69aa7022da3f588d449215ad314ecdb2eb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:15.005608   11967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.key ...
	I0923 10:22:15.005620   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.key: {Name:mkae46f7c7acf2efdeeb48926276ca9bf1fec02a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:15.005682   11967 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.key.3041dcfa
	I0923 10:22:15.005699   11967 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.crt.3041dcfa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0923 10:22:15.404464   11967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.crt.3041dcfa ...
	I0923 10:22:15.404496   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.crt.3041dcfa: {Name:mk8def3abfe8729e739e9892b8e2dfdfaa975e77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:15.404648   11967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.key.3041dcfa ...
	I0923 10:22:15.404661   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.key.3041dcfa: {Name:mkc1cfd8e1a6b6ba70edb50de4cc7a2de96fef4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:15.404730   11967 certs.go:381] copying /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.crt.3041dcfa -> /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.crt
	I0923 10:22:15.404821   11967 certs.go:385] copying /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.key.3041dcfa -> /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.key
	I0923 10:22:15.404875   11967 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/proxy-client.key
	I0923 10:22:15.404901   11967 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/proxy-client.crt with IP's: []
	I0923 10:22:15.857985   11967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/proxy-client.crt ...
	I0923 10:22:15.858015   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/proxy-client.crt: {Name:mk3ebea646b11f719e3aafe05a2859ab48c62804 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:15.858201   11967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/proxy-client.key ...
	I0923 10:22:15.858218   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/proxy-client.key: {Name:mk16575e201f9fd127e621495ba0c5bc4e64a79f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:15.858432   11967 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3772/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 10:22:15.858477   11967 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3772/.minikube/certs/ca.pem (1082 bytes)
	I0923 10:22:15.858514   11967 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3772/.minikube/certs/cert.pem (1123 bytes)
	I0923 10:22:15.858544   11967 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3772/.minikube/certs/key.pem (1679 bytes)
	I0923 10:22:15.859128   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 10:22:15.880908   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 10:22:15.902039   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 10:22:15.923187   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 10:22:15.944338   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 10:22:15.965387   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 10:22:15.986458   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 10:22:16.007433   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 10:22:16.028442   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 10:22:16.050349   11967 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 10:22:16.067334   11967 ssh_runner.go:195] Run: openssl version
	I0923 10:22:16.072904   11967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 10:22:16.081554   11967 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:22:16.084699   11967 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:22:16.084740   11967 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:22:16.091121   11967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
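The `openssl x509 -hash` plus `ln -fs …/b5213941.0` pair above exists because OpenSSL looks CA certs up by subject-hash symlinks (`<hash>.0`) in the trust directory. A sketch of that step with a throwaway self-signed cert in a temp dir, so no root or real CA is needed (the CN and paths here are assumptions, not the test's actual CA):

```shell
# Reproduce the subject-hash symlink step from the log, unprivileged.
certdir="$(mktemp -d)"
# Throwaway self-signed "CA" standing in for minikubeCA.pem:
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=minikubeCA" \
  -keyout "$certdir/ca.key" -out "$certdir/minikubeCA.pem" -days 1 2>/dev/null
# Same hash computation as the log's "openssl x509 -hash -noout -in ...":
hash="$(openssl x509 -hash -noout -in "$certdir/minikubeCA.pem")"
ln -fs "$certdir/minikubeCA.pem" "$certdir/$hash.0"
ls -l "$certdir"
```

In the log the hash came out as `b5213941`, hence the symlink name `/etc/ssl/certs/b5213941.0`.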
	I0923 10:22:16.099776   11967 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 10:22:16.102849   11967 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 10:22:16.102894   11967 kubeadm.go:392] StartCluster: {Name:addons-445250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-445250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:22:16.102966   11967 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 10:22:16.103005   11967 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 10:22:16.135153   11967 cri.go:89] found id: ""
	I0923 10:22:16.135208   11967 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 10:22:16.143317   11967 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 10:22:16.151131   11967 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0923 10:22:16.151185   11967 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 10:22:16.158804   11967 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 10:22:16.158823   11967 kubeadm.go:157] found existing configuration files:
	
	I0923 10:22:16.158860   11967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 10:22:16.166353   11967 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 10:22:16.166422   11967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 10:22:16.174207   11967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 10:22:16.181623   11967 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 10:22:16.181684   11967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 10:22:16.189018   11967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 10:22:16.196505   11967 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 10:22:16.196565   11967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 10:22:16.204028   11967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 10:22:16.211652   11967 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 10:22:16.211714   11967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 10:22:16.218868   11967 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0923 10:22:16.253475   11967 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 10:22:16.254017   11967 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 10:22:16.269754   11967 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0923 10:22:16.269837   11967 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I0923 10:22:16.269871   11967 kubeadm.go:310] OS: Linux
	I0923 10:22:16.269959   11967 kubeadm.go:310] CGROUPS_CPU: enabled
	I0923 10:22:16.270050   11967 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0923 10:22:16.270128   11967 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0923 10:22:16.270202   11967 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0923 10:22:16.270274   11967 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0923 10:22:16.270360   11967 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0923 10:22:16.270417   11967 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0923 10:22:16.270469   11967 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0923 10:22:16.270521   11967 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0923 10:22:16.318273   11967 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 10:22:16.318402   11967 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 10:22:16.318562   11967 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 10:22:16.324445   11967 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 10:22:16.327392   11967 out.go:235]   - Generating certificates and keys ...
	I0923 10:22:16.327503   11967 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 10:22:16.327598   11967 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 10:22:16.461803   11967 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 10:22:16.741266   11967 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 10:22:16.849130   11967 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 10:22:17.176671   11967 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 10:22:17.429269   11967 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 10:22:17.429471   11967 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-445250 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 10:22:17.596676   11967 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 10:22:17.596789   11967 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-445250 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 10:22:17.788256   11967 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 10:22:17.876354   11967 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 10:22:18.471196   11967 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 10:22:18.471297   11967 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 10:22:18.730115   11967 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 10:22:18.932151   11967 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 10:22:19.024826   11967 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 10:22:19.144008   11967 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 10:22:19.259815   11967 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 10:22:19.260334   11967 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 10:22:19.262678   11967 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 10:22:19.264869   11967 out.go:235]   - Booting up control plane ...
	I0923 10:22:19.265001   11967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 10:22:19.265096   11967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 10:22:19.265162   11967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 10:22:19.273358   11967 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 10:22:19.278617   11967 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 10:22:19.278696   11967 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 10:22:19.355589   11967 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 10:22:19.355691   11967 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 10:22:19.857077   11967 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.535465ms
	I0923 10:22:19.857205   11967 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 10:22:24.859100   11967 kubeadm.go:310] [api-check] The API server is healthy after 5.002044714s
	I0923 10:22:24.871015   11967 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 10:22:24.881606   11967 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 10:22:24.899928   11967 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 10:22:24.900246   11967 kubeadm.go:310] [mark-control-plane] Marking the node addons-445250 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 10:22:24.910024   11967 kubeadm.go:310] [bootstrap-token] Using token: tzcr7c.qy08ihjpsu8woy77
	I0923 10:22:24.911692   11967 out.go:235]   - Configuring RBAC rules ...
	I0923 10:22:24.911836   11967 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 10:22:24.914938   11967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 10:22:24.920963   11967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 10:22:24.923728   11967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 10:22:24.926249   11967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 10:22:24.929913   11967 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 10:22:25.266487   11967 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 10:22:25.686706   11967 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 10:22:26.267074   11967 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 10:22:26.268161   11967 kubeadm.go:310] 
	I0923 10:22:26.268232   11967 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 10:22:26.268246   11967 kubeadm.go:310] 
	I0923 10:22:26.268333   11967 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 10:22:26.268349   11967 kubeadm.go:310] 
	I0923 10:22:26.268371   11967 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 10:22:26.268443   11967 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 10:22:26.268498   11967 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 10:22:26.268503   11967 kubeadm.go:310] 
	I0923 10:22:26.268548   11967 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 10:22:26.268555   11967 kubeadm.go:310] 
	I0923 10:22:26.268595   11967 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 10:22:26.268602   11967 kubeadm.go:310] 
	I0923 10:22:26.268680   11967 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 10:22:26.268775   11967 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 10:22:26.268850   11967 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 10:22:26.268858   11967 kubeadm.go:310] 
	I0923 10:22:26.268962   11967 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 10:22:26.269039   11967 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 10:22:26.269050   11967 kubeadm.go:310] 
	I0923 10:22:26.269125   11967 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tzcr7c.qy08ihjpsu8woy77 \
	I0923 10:22:26.269229   11967 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:122e8e80e5d252d0370d2ad3bf07440a5ae64df4281d54e7d14ffb6b148b696e \
	I0923 10:22:26.269251   11967 kubeadm.go:310] 	--control-plane 
	I0923 10:22:26.269256   11967 kubeadm.go:310] 
	I0923 10:22:26.269371   11967 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 10:22:26.269381   11967 kubeadm.go:310] 
	I0923 10:22:26.269476   11967 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tzcr7c.qy08ihjpsu8woy77 \
	I0923 10:22:26.269658   11967 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:122e8e80e5d252d0370d2ad3bf07440a5ae64df4281d54e7d14ffb6b148b696e 
	I0923 10:22:26.271764   11967 kubeadm.go:310] W0923 10:22:16.250858    1303 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:22:26.272087   11967 kubeadm.go:310] W0923 10:22:16.251517    1303 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:22:26.272386   11967 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I0923 10:22:26.272539   11967 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 10:22:26.272573   11967 cni.go:84] Creating CNI manager for ""
	I0923 10:22:26.272586   11967 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0923 10:22:26.274792   11967 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0923 10:22:26.276289   11967 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0923 10:22:26.279952   11967 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0923 10:22:26.279967   11967 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0923 10:22:26.296902   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0923 10:22:26.488183   11967 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 10:22:26.488297   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:26.488297   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-445250 minikube.k8s.io/updated_at=2024_09_23T10_22_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986 minikube.k8s.io/name=addons-445250 minikube.k8s.io/primary=true
	I0923 10:22:26.495506   11967 ops.go:34] apiserver oom_adj: -16
	I0923 10:22:26.569029   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:27.069137   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:27.570004   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:28.069287   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:28.569763   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:29.069617   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:29.569788   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:30.069539   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:30.569838   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:30.639983   11967 kubeadm.go:1113] duration metric: took 4.1517431s to wait for elevateKubeSystemPrivileges
	I0923 10:22:30.640014   11967 kubeadm.go:394] duration metric: took 14.537124377s to StartCluster
	I0923 10:22:30.640032   11967 settings.go:142] acquiring lock: {Name:mk872f1d275188f797c9a12c8098849cd4e5cab3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:30.640127   11967 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19689-3772/kubeconfig
	I0923 10:22:30.640473   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/kubeconfig: {Name:mk157cbe356b4d3a0ed9cd6c04752524343ac891 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:30.640639   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 10:22:30.640656   11967 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:22:30.640716   11967 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 10:22:30.640836   11967 addons.go:69] Setting yakd=true in profile "addons-445250"
	I0923 10:22:30.640848   11967 config.go:182] Loaded profile config "addons-445250": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:22:30.640862   11967 addons.go:234] Setting addon yakd=true in "addons-445250"
	I0923 10:22:30.640856   11967 addons.go:69] Setting ingress-dns=true in profile "addons-445250"
	I0923 10:22:30.640883   11967 addons.go:234] Setting addon ingress-dns=true in "addons-445250"
	I0923 10:22:30.640892   11967 addons.go:69] Setting gcp-auth=true in profile "addons-445250"
	I0923 10:22:30.640895   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.640889   11967 addons.go:69] Setting default-storageclass=true in profile "addons-445250"
	I0923 10:22:30.640909   11967 mustload.go:65] Loading cluster: addons-445250
	I0923 10:22:30.640900   11967 addons.go:69] Setting cloud-spanner=true in profile "addons-445250"
	I0923 10:22:30.640917   11967 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-445250"
	I0923 10:22:30.640906   11967 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-445250"
	I0923 10:22:30.640934   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.640949   11967 addons.go:69] Setting registry=true in profile "addons-445250"
	I0923 10:22:30.640966   11967 addons.go:234] Setting addon registry=true in "addons-445250"
	I0923 10:22:30.640971   11967 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-445250"
	I0923 10:22:30.640995   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.640999   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.641053   11967 config.go:182] Loaded profile config "addons-445250": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:22:30.641257   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.641280   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.641366   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.641379   11967 addons.go:69] Setting inspektor-gadget=true in profile "addons-445250"
	I0923 10:22:30.641392   11967 addons.go:234] Setting addon inspektor-gadget=true in "addons-445250"
	I0923 10:22:30.641415   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.641426   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.641435   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.641870   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.642158   11967 addons.go:69] Setting ingress=true in profile "addons-445250"
	I0923 10:22:30.642181   11967 addons.go:234] Setting addon ingress=true in "addons-445250"
	I0923 10:22:30.642196   11967 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-445250"
	I0923 10:22:30.642213   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.642215   11967 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-445250"
	I0923 10:22:30.642394   11967 addons.go:69] Setting volcano=true in profile "addons-445250"
	I0923 10:22:30.642518   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.642544   11967 addons.go:234] Setting addon volcano=true in "addons-445250"
	I0923 10:22:30.642576   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.642679   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.642705   11967 addons.go:69] Setting volumesnapshots=true in profile "addons-445250"
	I0923 10:22:30.642720   11967 addons.go:234] Setting addon volumesnapshots=true in "addons-445250"
	I0923 10:22:30.642741   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.642878   11967 addons.go:69] Setting metrics-server=true in profile "addons-445250"
	I0923 10:22:30.642900   11967 addons.go:234] Setting addon metrics-server=true in "addons-445250"
	I0923 10:22:30.642925   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.641369   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.640934   11967 addons.go:234] Setting addon cloud-spanner=true in "addons-445250"
	I0923 10:22:30.642998   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.643036   11967 addons.go:69] Setting storage-provisioner=true in profile "addons-445250"
	I0923 10:22:30.643061   11967 addons.go:234] Setting addon storage-provisioner=true in "addons-445250"
	I0923 10:22:30.643085   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.643519   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.648041   11967 out.go:177] * Verifying Kubernetes components...
	I0923 10:22:30.648388   11967 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-445250"
	I0923 10:22:30.648409   11967 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-445250"
	I0923 10:22:30.648446   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.648949   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.650296   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.650451   11967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:22:30.665893   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.666046   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.666054   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.667975   11967 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 10:22:30.669562   11967 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 10:22:30.669584   11967 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 10:22:30.669644   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.685327   11967 addons.go:234] Setting addon default-storageclass=true in "addons-445250"
	I0923 10:22:30.685375   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.685862   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.686064   11967 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 10:22:30.687257   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.687775   11967 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 10:22:30.687828   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 10:22:30.687898   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.698483   11967 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 10:22:30.700153   11967 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 10:22:30.702006   11967 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 10:22:30.702025   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 10:22:30.702098   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.711515   11967 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 10:22:30.713056   11967 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 10:22:30.713078   11967 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 10:22:30.713145   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.714581   11967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 10:22:30.717305   11967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 10:22:30.718944   11967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 10:22:30.720619   11967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 10:22:30.722483   11967 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 10:22:30.722584   11967 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 10:22:30.724093   11967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 10:22:30.724118   11967 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 10:22:30.724177   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.724581   11967 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 10:22:30.726222   11967 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 10:22:30.726536   11967 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:22:30.726554   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 10:22:30.726622   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.730002   11967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 10:22:30.732162   11967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 10:22:30.733954   11967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 10:22:30.733974   11967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 10:22:30.734033   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.741467   11967 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-445250"
	I0923 10:22:30.741546   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.742079   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.744624   11967 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:22:30.747096   11967 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 10:22:30.749664   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.757242   11967 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:22:30.758538   11967 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 10:22:30.759791   11967 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 10:22:30.759812   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 10:22:30.759870   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.760711   11967 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 10:22:30.760738   11967 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 10:22:30.760797   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.757266   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.770117   11967 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 10:22:30.770190   11967 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0923 10:22:30.772541   11967 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0923 10:22:30.774529   11967 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 10:22:30.774556   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 10:22:30.774621   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.775602   11967 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:22:30.775624   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 10:22:30.775674   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.787036   11967 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 10:22:30.787436   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.789020   11967 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 10:22:30.789037   11967 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 10:22:30.789092   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.790579   11967 out.go:177]   - Using image docker.io/busybox:stable
	I0923 10:22:30.792232   11967 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:22:30.792253   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 10:22:30.792310   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.799243   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.802426   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.804649   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.807350   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.811528   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.815735   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.815938   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.817779   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.818866   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.821325   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	W0923 10:22:30.833869   11967 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 10:22:30.833911   11967 retry.go:31] will retry after 251.502566ms: ssh: handshake failed: EOF
	I0923 10:22:30.930840   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 10:22:31.038430   11967 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:22:31.130020   11967 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 10:22:31.130106   11967 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 10:22:31.148662   11967 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 10:22:31.148713   11967 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 10:22:31.247685   11967 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 10:22:31.247721   11967 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 10:22:31.329027   11967 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 10:22:31.329056   11967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 10:22:31.329202   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:22:31.329335   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:22:31.329470   11967 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:22:31.329484   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 10:22:31.339924   11967 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 10:22:31.339949   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 10:22:31.342429   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:22:31.345422   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 10:22:31.346167   11967 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 10:22:31.346186   11967 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 10:22:31.437032   11967 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 10:22:31.437063   11967 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 10:22:31.440413   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 10:22:31.445193   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 10:22:31.527011   11967 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 10:22:31.527097   11967 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 10:22:31.546462   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:22:31.627051   11967 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 10:22:31.627133   11967 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 10:22:31.627825   11967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 10:22:31.627867   11967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 10:22:31.630751   11967 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 10:22:31.630811   11967 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 10:22:31.635963   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:22:31.636203   11967 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 10:22:31.636238   11967 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 10:22:31.639336   11967 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 10:22:31.639357   11967 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 10:22:31.826246   11967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 10:22:31.826334   11967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 10:22:31.826554   11967 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:22:31.826604   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 10:22:31.827508   11967 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:22:31.827551   11967 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 10:22:31.930152   11967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 10:22:31.930233   11967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 10:22:31.946738   11967 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 10:22:31.946849   11967 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 10:22:32.031166   11967 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 10:22:32.031251   11967 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 10:22:32.038777   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:22:32.046798   11967 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.115916979s)
	I0923 10:22:32.046977   11967 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0923 10:22:32.046913   11967 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.008451247s)
	I0923 10:22:32.048977   11967 node_ready.go:35] waiting up to 6m0s for node "addons-445250" to be "Ready" ...
	I0923 10:22:32.127879   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:22:32.239522   11967 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 10:22:32.239604   11967 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 10:22:32.329601   11967 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 10:22:32.329684   11967 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 10:22:32.343924   11967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 10:22:32.344005   11967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 10:22:32.445635   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.11638761s)
	I0923 10:22:32.633750   11967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 10:22:32.633833   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 10:22:32.638879   11967 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:22:32.638905   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 10:22:32.645923   11967 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 10:22:32.645948   11967 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 10:22:32.649688   11967 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-445250" context rescaled to 1 replicas
	I0923 10:22:32.949369   11967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 10:22:32.949444   11967 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 10:22:33.033520   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.704142355s)
	I0923 10:22:33.227536   11967 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:22:33.227561   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 10:22:33.239664   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:22:33.427133   11967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 10:22:33.427213   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 10:22:33.532321   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:22:33.727155   11967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 10:22:33.727249   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 10:22:34.046873   11967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:22:34.046911   11967 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 10:22:34.149845   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:34.233755   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:22:35.227770   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.885253813s)
	I0923 10:22:35.227973   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.882480333s)
	I0923 10:22:36.338824   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.898369608s)
	I0923 10:22:36.338860   11967 addons.go:475] Verifying addon ingress=true in "addons-445250"
	I0923 10:22:36.339007   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.792498678s)
	I0923 10:22:36.339038   11967 addons.go:475] Verifying addon registry=true in "addons-445250"
	I0923 10:22:36.339090   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.703092374s)
	I0923 10:22:36.338954   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.893724471s)
	I0923 10:22:36.339140   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.300273199s)
	I0923 10:22:36.339204   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.211246073s)
	I0923 10:22:36.340459   11967 addons.go:475] Verifying addon metrics-server=true in "addons-445250"
	I0923 10:22:36.340949   11967 out.go:177] * Verifying registry addon...
	I0923 10:22:36.340964   11967 out.go:177] * Verifying ingress addon...
	I0923 10:22:36.342063   11967 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-445250 service yakd-dashboard -n yakd-dashboard
	
	I0923 10:22:36.343731   11967 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 10:22:36.343939   11967 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 10:22:36.348954   11967 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 10:22:36.348976   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:36.350483   11967 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 10:22:36.350503   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:36.554792   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:36.933962   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:36.937073   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:37.041012   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.801300714s)
	W0923 10:22:37.041067   11967 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 10:22:37.041095   11967 retry.go:31] will retry after 370.601258ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 10:22:37.041141   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.508711885s)
	I0923 10:22:37.291210   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.057397179s)
	I0923 10:22:37.291243   11967 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-445250"
	I0923 10:22:37.293123   11967 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 10:22:37.295283   11967 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 10:22:37.330959   11967 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 10:22:37.330988   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:37.411870   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:22:37.431984   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:37.432434   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:37.799504   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:37.846652   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:37.847318   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:37.894861   11967 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 10:22:37.894922   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:37.910904   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:38.037420   11967 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 10:22:38.132707   11967 addons.go:234] Setting addon gcp-auth=true in "addons-445250"
	I0923 10:22:38.132762   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:38.133409   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:38.167105   11967 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 10:22:38.167160   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:38.184042   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:38.329969   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:38.348850   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:38.349827   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:38.798829   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:38.847385   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:38.847868   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:39.052033   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:39.298764   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:39.347022   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:39.347523   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:39.828069   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:39.847719   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:39.848042   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:40.039490   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.627570811s)
	I0923 10:22:40.039578   11967 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.872440211s)
	I0923 10:22:40.042091   11967 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:22:40.043723   11967 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 10:22:40.045225   11967 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 10:22:40.045257   11967 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 10:22:40.064228   11967 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 10:22:40.064253   11967 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 10:22:40.082222   11967 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:22:40.082246   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 10:22:40.136907   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:22:40.329111   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:40.347816   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:40.348314   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:40.743497   11967 addons.go:475] Verifying addon gcp-auth=true in "addons-445250"
	I0923 10:22:40.745653   11967 out.go:177] * Verifying gcp-auth addon...
	I0923 10:22:40.747983   11967 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 10:22:40.750552   11967 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 10:22:40.750569   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:40.851357   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:40.851626   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:40.852043   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:41.052135   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:41.252005   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:41.298689   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:41.347279   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:41.347602   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:41.751632   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:41.798093   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:41.846555   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:41.847144   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:42.250929   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:42.298461   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:42.346929   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:42.347234   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:42.750863   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:42.798245   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:42.846557   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:42.847000   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:43.251127   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:43.355618   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:43.356062   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:43.356298   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:43.552536   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:43.751077   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:43.798734   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:43.847103   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:43.847581   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:44.251644   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:44.298309   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:44.346771   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:44.347034   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:44.750721   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:44.798594   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:44.847101   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:44.847535   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:45.251199   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:45.299044   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:45.347547   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:45.348189   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:45.750705   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:45.798232   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:45.846841   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:45.847120   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:46.052259   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:46.250899   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:46.298397   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:46.346963   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:46.347430   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:46.751856   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:46.798438   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:46.846533   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:46.846987   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:47.250492   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:47.298879   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:47.347127   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:47.347819   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:47.751310   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:47.799015   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:47.847226   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:47.847773   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:48.251448   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:48.298949   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:48.347317   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:48.347589   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:48.551694   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:48.752137   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:48.798472   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:48.846972   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:48.847400   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:49.251341   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:49.298972   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:49.347471   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:49.347951   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:49.750703   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:49.799078   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:49.847429   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:49.847812   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:50.251408   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:50.298942   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:50.347421   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:50.347893   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:50.552081   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:50.750626   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:50.798173   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:50.847276   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:50.848032   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:51.251477   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:51.298961   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:51.347458   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:51.347867   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:51.750664   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:51.798274   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:51.846535   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:51.847185   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:52.250749   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:52.298515   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:52.346957   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:52.347409   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:52.552403   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:52.751135   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:52.798654   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:52.847072   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:52.847476   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:53.251020   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:53.298711   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:53.347029   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:53.347626   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:53.751732   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:53.798241   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:53.846461   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:53.846962   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:54.250842   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:54.298271   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:54.346627   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:54.346949   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:54.750779   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:54.798482   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:54.846682   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:54.847175   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:55.052413   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:55.251076   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:55.298677   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:55.347003   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:55.347743   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:55.751068   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:55.798538   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:55.847067   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:55.847484   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:56.251150   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:56.298943   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:56.347453   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:56.347896   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:56.751095   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:56.798596   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:56.846745   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:56.847179   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:57.250839   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:57.298505   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:57.347074   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:57.347506   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:57.551579   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:57.751148   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:57.798529   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:57.846924   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:57.847369   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:58.251170   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:58.298665   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:58.347156   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:58.347556   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:58.751622   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:58.798291   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:58.846556   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:58.847159   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:59.251703   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:59.298260   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:59.346762   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:59.347393   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:59.552250   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:59.750656   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:59.798196   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:59.846497   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:59.846841   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:00.251199   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:00.298537   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:00.347146   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:00.347462   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:00.751327   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:00.798720   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:00.846991   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:00.847390   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:01.251092   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:01.298651   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:01.346885   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:01.347266   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:01.552532   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:23:01.751323   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:01.798797   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:01.847134   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:01.847636   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:02.251414   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:02.299046   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:02.346776   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:02.346976   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:02.751210   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:02.798665   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:02.847041   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:02.847588   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:03.251424   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:03.298846   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:03.347477   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:03.347937   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:03.751580   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:03.797877   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:03.847450   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:03.847870   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:04.052155   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:23:04.250974   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:04.298589   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:04.346910   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:04.347485   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:04.751530   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:04.799567   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:04.846770   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:04.847184   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:05.251007   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:05.298445   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:05.347135   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:05.347527   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:05.751388   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:05.799093   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:05.847646   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:05.848031   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:06.052367   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:23:06.250840   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:06.298387   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:06.346761   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:06.347238   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:06.751720   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:06.798318   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:06.846779   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:06.847219   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:07.251318   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:07.298911   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:07.347408   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:07.347769   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:07.751469   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:07.798992   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:07.847606   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:07.847853   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:08.251235   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:08.298906   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:08.347450   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:08.348057   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:08.552198   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:23:08.750869   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:08.798365   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:08.846408   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:08.846765   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:09.251760   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:09.298434   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:09.346956   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:09.347369   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:09.750692   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:09.798046   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:09.847526   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:09.848062   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:10.250707   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:10.298206   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:10.346577   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:10.346962   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:10.552617   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:23:10.750936   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:10.798405   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:10.846773   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:10.847081   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:11.250576   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:11.298011   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:11.347411   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:11.347813   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:11.750864   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:11.798382   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:11.846687   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:11.847174   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:12.250954   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:12.298486   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:12.346963   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:12.347499   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:12.552672   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:23:12.751565   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:12.798263   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:12.846609   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:12.847288   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:13.250649   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:13.298224   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:13.346581   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:13.347009   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:13.750948   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:13.798498   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:13.846756   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:13.847196   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:14.250875   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:14.298430   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:14.346812   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:14.347181   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:14.757534   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:14.831755   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:14.863298   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:14.863307   11967 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 10:23:14.863337   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:15.054652   11967 node_ready.go:49] node "addons-445250" has status "Ready":"True"
	I0923 10:23:15.054684   11967 node_ready.go:38] duration metric: took 43.005633575s for node "addons-445250" to be "Ready" ...
	I0923 10:23:15.054698   11967 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:23:15.138931   11967 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fx58w" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:15.251452   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:15.301612   11967 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 10:23:15.301637   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:15.427434   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:15.428141   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:15.753427   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:15.855252   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:15.855518   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:15.855536   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:16.143869   11967 pod_ready.go:93] pod "coredns-7c65d6cfc9-fx58w" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:16.143890   11967 pod_ready.go:82] duration metric: took 1.004925199s for pod "coredns-7c65d6cfc9-fx58w" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.143908   11967 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-445250" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.147771   11967 pod_ready.go:93] pod "etcd-addons-445250" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:16.147797   11967 pod_ready.go:82] duration metric: took 3.880973ms for pod "etcd-addons-445250" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.147813   11967 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-445250" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.151340   11967 pod_ready.go:93] pod "kube-apiserver-addons-445250" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:16.151360   11967 pod_ready.go:82] duration metric: took 3.538721ms for pod "kube-apiserver-addons-445250" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.151379   11967 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-445250" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.154908   11967 pod_ready.go:93] pod "kube-controller-manager-addons-445250" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:16.154925   11967 pod_ready.go:82] duration metric: took 3.540171ms for pod "kube-controller-manager-addons-445250" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.154937   11967 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wkmtk" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.251122   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:16.252541   11967 pod_ready.go:93] pod "kube-proxy-wkmtk" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:16.252560   11967 pod_ready.go:82] duration metric: took 97.616289ms for pod "kube-proxy-wkmtk" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.252569   11967 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-445250" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.298885   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:16.346935   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:16.347232   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:16.652929   11967 pod_ready.go:93] pod "kube-scheduler-addons-445250" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:16.652958   11967 pod_ready.go:82] duration metric: took 400.380255ms for pod "kube-scheduler-addons-445250" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.652971   11967 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.751305   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:16.799551   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:16.847949   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:16.848185   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:17.251997   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:17.299328   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:17.347771   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:17.348037   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:17.751574   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:17.799610   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:17.848015   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:17.848650   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:18.250930   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:18.299312   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:18.347764   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:18.348433   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:18.659062   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:18.752418   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:18.799730   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:18.847222   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:18.847395   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:19.251503   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:19.299737   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:19.347056   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:19.347618   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:19.751323   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:19.799536   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:19.847668   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:19.847798   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:20.251502   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:20.299967   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:20.347574   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:20.347946   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:20.752027   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:20.799582   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:20.847709   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:20.848055   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:21.159098   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:21.252132   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:21.299964   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:21.346779   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:21.347020   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:21.755745   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:21.858762   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:21.859520   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:21.860194   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:22.251409   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:22.300044   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:22.346882   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:22.347235   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:22.751664   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:22.852777   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:22.853039   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:22.853226   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:23.251068   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:23.299520   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:23.347578   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:23.347931   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:23.658413   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:23.751068   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:23.851935   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:23.852480   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:23.852589   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:24.251460   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:24.299593   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:24.347663   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:24.348012   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:24.752139   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:24.829769   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:24.848533   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:24.848714   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:25.250787   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:25.299026   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:25.347280   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:25.347450   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:25.751684   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:25.852481   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:25.852917   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:25.853012   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:26.158564   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:26.251164   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:26.299637   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:26.346953   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:26.347393   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:26.751177   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:26.800249   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:26.900081   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:26.900480   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:27.251580   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:27.299779   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:27.352497   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:27.353041   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:27.751475   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:27.853114   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:27.853317   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:27.853731   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:28.158745   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:28.251214   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:28.298730   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:28.347152   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:28.347334   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:28.751028   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:28.851629   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:28.852256   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:28.852278   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:29.251249   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:29.299788   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:29.347140   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:29.347661   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:29.752405   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:29.800154   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:29.846882   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:29.847331   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:30.251406   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:30.300215   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:30.347056   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:30.347641   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:30.658131   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:30.751454   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:30.800486   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:30.847123   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:30.847735   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:31.252032   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:31.300096   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:31.352870   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:31.353371   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:31.751766   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:31.804056   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:31.847133   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:31.847758   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:32.251744   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:32.299223   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:32.347388   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:32.347653   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:32.751592   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:32.799179   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:32.847018   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:32.847414   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:33.159363   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:33.251561   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:33.298927   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:33.347523   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:33.347589   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:33.751641   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:33.798959   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:33.847153   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:33.847494   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:34.251511   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:34.329090   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:34.346926   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:34.347136   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:34.751327   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:34.852511   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:34.853626   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:34.853680   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:35.252182   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:35.299378   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:35.347693   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:35.348033   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:35.658931   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:35.752279   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:35.800092   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:35.852945   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:35.853579   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:36.251424   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:36.300230   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:36.400564   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:36.400859   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:36.750934   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:36.799444   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:36.848021   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:36.848290   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:37.251588   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:37.300049   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:37.352506   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:37.352773   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:37.750964   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:37.799890   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:37.852354   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:37.852606   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:38.158705   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:38.251162   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:38.299581   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:38.348004   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:38.348356   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:38.751115   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:38.799410   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:38.848082   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:38.848190   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:39.251584   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:39.300084   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:39.347098   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:39.347599   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:39.751816   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:39.799402   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:39.847842   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:39.848902   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:40.251286   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:40.299565   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:40.347972   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:40.348284   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:40.659317   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:40.751324   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:40.829307   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:40.847208   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:40.847905   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:41.251435   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:41.328874   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:41.347851   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:41.348180   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:41.751765   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:41.799242   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:41.847826   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:41.848244   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:42.252250   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:42.299996   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:42.348350   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:42.348560   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:42.751101   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:42.828445   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:42.847541   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:42.847967   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:43.158526   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:43.251605   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:43.331532   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:43.347791   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:43.348403   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:43.751856   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:43.848295   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:43.850906   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:43.850993   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:44.251344   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:44.299676   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:44.348302   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:44.348644   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:44.751195   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:44.799580   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:44.847459   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:44.847814   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:45.251761   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:45.299534   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:45.347837   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:45.348259   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:45.658244   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:45.751109   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:45.799378   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:45.847844   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:45.848484   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:46.250862   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:46.299309   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:46.347588   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:46.347881   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:46.752096   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:46.830819   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:46.849324   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:46.849449   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:47.251489   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:47.329696   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:47.352328   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:47.352675   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:47.659565   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:47.751839   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:47.799363   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:47.847629   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:47.848160   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:48.251322   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:48.299644   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:48.348437   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:48.349011   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:48.751971   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:48.799208   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:48.847311   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:48.847677   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:49.252345   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:49.353336   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:49.354113   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:49.354290   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:49.751696   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:49.798850   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:49.847044   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:49.847294   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:50.159412   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:50.251315   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:50.300359   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:50.347407   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:50.347852   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:50.752148   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:50.853086   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:50.853794   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:50.853937   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:51.251808   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:51.299349   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:51.347605   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:51.347803   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:51.770445   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:51.800059   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:51.847168   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:51.847523   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:52.252094   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:52.299496   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:52.347921   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:52.348258   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:52.658499   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:52.751149   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:52.799391   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:52.847917   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:52.848195   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:53.251084   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:53.299459   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:53.348165   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:53.349134   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:53.751749   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:53.799085   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:53.847146   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:53.847698   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:54.251525   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:54.299814   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:54.347087   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:54.347485   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:54.658586   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:54.751738   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:54.798920   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:54.847135   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:54.847497   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:55.251534   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:55.299916   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:55.347315   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:55.347570   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:55.751726   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:55.799056   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:55.847243   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:55.847517   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:56.250904   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:56.329860   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:56.347928   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:56.348157   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:56.659564   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:56.751715   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:56.798895   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:56.848713   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:56.849087   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:57.327397   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:57.330171   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:57.347623   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:57.349031   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:57.752514   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:57.831070   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:57.849260   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:57.929421   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:58.251507   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:58.329086   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:58.348239   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:58.349299   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:58.659625   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:58.751107   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:58.828912   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:58.848131   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:58.848674   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:59.251980   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:59.329647   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:59.347593   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:59.348472   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:59.751659   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:59.799518   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:59.847937   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:59.848242   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:00.251754   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:00.299551   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:00.348376   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:00.348776   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:00.751910   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:00.799228   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:00.847852   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:00.848386   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:01.159636   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:01.251545   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:01.300654   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:01.347444   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:01.347798   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:01.751291   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:01.799969   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:01.847062   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:01.847151   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:02.250921   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:02.299432   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:02.347411   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:02.347701   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:02.751456   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:02.799637   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:02.846847   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:02.847345   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:03.251056   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:03.299408   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:03.349455   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:03.349496   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:03.658044   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:03.751632   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:03.800475   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:03.847815   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:03.848013   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:04.251337   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:04.300084   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:04.347301   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:04.347740   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:04.751934   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:04.828237   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:04.847307   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:04.847974   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:05.252170   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:05.328300   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:05.347284   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:05.347600   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:05.658958   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:05.752071   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:05.853170   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:05.853743   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:05.853986   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:06.252000   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:06.328730   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:06.347793   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:06.348249   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:06.751665   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:06.828961   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:06.849117   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:06.849787   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:07.251616   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:07.300647   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:07.347543   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:07.348597   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:07.771812   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:07.876237   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:07.876559   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:07.877562   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:08.159319   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:08.251653   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:08.299807   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:08.348721   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:08.348927   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:08.752006   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:08.799289   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:08.847514   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:08.847770   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:09.251104   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:09.299398   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:09.347380   11967 kapi.go:107] duration metric: took 1m33.003646242s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 10:24:09.347767   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:09.751748   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:09.800319   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:09.847334   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:10.251156   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:10.299664   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:10.348059   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:10.658634   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:10.750897   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:10.799121   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:10.847887   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:11.251008   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:11.299420   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:11.348466   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:11.750925   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:11.799015   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:11.847967   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:12.251825   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:12.299403   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:12.347748   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:12.751282   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:12.800000   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:12.847468   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:13.159267   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:13.251700   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:13.299065   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:13.347829   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:13.752005   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:13.799406   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:13.853893   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:14.253633   11967 kapi.go:107] duration metric: took 1m33.505659378s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 10:24:14.257404   11967 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-445250 cluster.
	I0923 10:24:14.258882   11967 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 10:24:14.260323   11967 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 10:24:14.299938   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:14.348717   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:14.799563   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:14.847849   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:15.329354   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:15.347994   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:15.658992   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:15.799926   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:15.847969   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:16.299363   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:16.348302   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:16.799654   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:16.848799   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:17.299696   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:17.348435   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:17.659051   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:17.799970   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:17.848268   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:18.300125   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:18.400393   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:18.799588   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:18.848195   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:19.300200   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:19.348989   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:19.799189   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:19.847633   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:20.166062   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:20.330843   11967 kapi.go:107] duration metric: took 1m43.035557511s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 10:24:20.348554   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:20.848824   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:21.348354   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:21.848082   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:22.348802   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:22.659419   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:22.847751   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:23.348517   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:23.848949   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:24.347848   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:24.848694   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:25.158725   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:25.348583   11967 kapi.go:107] duration metric: took 1m49.004639978s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 10:24:25.350870   11967 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, ingress-dns, storage-provisioner-rancher, storage-provisioner, cloud-spanner, metrics-server, yakd, inspektor-gadget, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0923 10:24:25.353079   11967 addons.go:510] duration metric: took 1m54.712359706s for enable addons: enabled=[nvidia-device-plugin default-storageclass ingress-dns storage-provisioner-rancher storage-provisioner cloud-spanner metrics-server yakd inspektor-gadget volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I0923 10:24:27.658306   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:29.658584   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:32.158410   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:34.657759   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:36.658121   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:38.658705   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:40.659320   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:41.658677   11967 pod_ready.go:93] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"True"
	I0923 10:24:41.658710   11967 pod_ready.go:82] duration metric: took 1m25.005729374s for pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace to be "Ready" ...
	I0923 10:24:41.658725   11967 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-649c2" in "kube-system" namespace to be "Ready" ...
	I0923 10:24:41.663462   11967 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-649c2" in "kube-system" namespace has status "Ready":"True"
	I0923 10:24:41.663484   11967 pod_ready.go:82] duration metric: took 4.751466ms for pod "nvidia-device-plugin-daemonset-649c2" in "kube-system" namespace to be "Ready" ...
	I0923 10:24:41.663503   11967 pod_ready.go:39] duration metric: took 1m26.60878964s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:24:41.663521   11967 api_server.go:52] waiting for apiserver process to appear ...
	I0923 10:24:41.663567   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0923 10:24:41.663611   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 10:24:41.696491   11967 cri.go:89] found id: "8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1"
	I0923 10:24:41.696517   11967 cri.go:89] found id: ""
	I0923 10:24:41.696526   11967 logs.go:276] 1 containers: [8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1]
	I0923 10:24:41.696575   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:41.699787   11967 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0923 10:24:41.699845   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 10:24:41.732611   11967 cri.go:89] found id: "5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478"
	I0923 10:24:41.732632   11967 cri.go:89] found id: ""
	I0923 10:24:41.732641   11967 logs.go:276] 1 containers: [5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478]
	I0923 10:24:41.732680   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:41.736045   11967 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0923 10:24:41.736113   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 10:24:41.768329   11967 cri.go:89] found id: "1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2"
	I0923 10:24:41.768360   11967 cri.go:89] found id: ""
	I0923 10:24:41.768370   11967 logs.go:276] 1 containers: [1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2]
	I0923 10:24:41.768426   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:41.771643   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0923 10:24:41.771702   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 10:24:41.805603   11967 cri.go:89] found id: "5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9"
	I0923 10:24:41.805627   11967 cri.go:89] found id: ""
	I0923 10:24:41.805637   11967 logs.go:276] 1 containers: [5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9]
	I0923 10:24:41.805686   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:41.808896   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0923 10:24:41.808968   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 10:24:41.843211   11967 cri.go:89] found id: "60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41"
	I0923 10:24:41.843234   11967 cri.go:89] found id: ""
	I0923 10:24:41.843242   11967 logs.go:276] 1 containers: [60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41]
	I0923 10:24:41.843293   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:41.846569   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 10:24:41.846631   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 10:24:41.878951   11967 cri.go:89] found id: "3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86"
	I0923 10:24:41.878969   11967 cri.go:89] found id: ""
	I0923 10:24:41.878977   11967 logs.go:276] 1 containers: [3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86]
	I0923 10:24:41.879015   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:41.882160   11967 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0923 10:24:41.882216   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 10:24:41.913249   11967 cri.go:89] found id: "3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147"
	I0923 10:24:41.913273   11967 cri.go:89] found id: ""
	I0923 10:24:41.913281   11967 logs.go:276] 1 containers: [3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147]
	I0923 10:24:41.913337   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:41.916358   11967 logs.go:123] Gathering logs for kubelet ...
	I0923 10:24:41.916384   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 10:24:41.962291   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778748    1645 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:24:41.962472   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778805    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	W0923 10:24:41.962607   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778858    1645 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:24:41.962764   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778872    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	I0923 10:24:42.000201   11967 logs.go:123] Gathering logs for kube-proxy [60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41] ...
	I0923 10:24:42.000236   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41"
	I0923 10:24:42.033282   11967 logs.go:123] Gathering logs for container status ...
	I0923 10:24:42.033307   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 10:24:42.074054   11967 logs.go:123] Gathering logs for coredns [1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2] ...
	I0923 10:24:42.074089   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2"
	I0923 10:24:42.107707   11967 logs.go:123] Gathering logs for kube-scheduler [5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9] ...
	I0923 10:24:42.107734   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9"
	I0923 10:24:42.144872   11967 logs.go:123] Gathering logs for kube-controller-manager [3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86] ...
	I0923 10:24:42.144926   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86"
	I0923 10:24:42.199993   11967 logs.go:123] Gathering logs for kindnet [3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147] ...
	I0923 10:24:42.200024   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147"
	I0923 10:24:42.234245   11967 logs.go:123] Gathering logs for dmesg ...
	I0923 10:24:42.234274   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 10:24:42.246004   11967 logs.go:123] Gathering logs for describe nodes ...
	I0923 10:24:42.246038   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 10:24:42.353925   11967 logs.go:123] Gathering logs for kube-apiserver [8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1] ...
	I0923 10:24:42.353954   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1"
	I0923 10:24:42.444039   11967 logs.go:123] Gathering logs for etcd [5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478] ...
	I0923 10:24:42.444069   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478"
	I0923 10:24:42.488688   11967 logs.go:123] Gathering logs for CRI-O ...
	I0923 10:24:42.488720   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0923 10:24:42.565082   11967 out.go:358] Setting ErrFile to fd 2...
	I0923 10:24:42.565110   11967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 10:24:42.565165   11967 out.go:270] X Problems detected in kubelet:
	W0923 10:24:42.565173   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778748    1645 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:24:42.565180   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778805    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	W0923 10:24:42.565191   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778858    1645 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:24:42.565197   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778872    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	I0923 10:24:42.565201   11967 out.go:358] Setting ErrFile to fd 2...
	I0923 10:24:42.565206   11967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:24:52.566001   11967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:24:52.579983   11967 api_server.go:72] duration metric: took 2m21.939291421s to wait for apiserver process to appear ...
	I0923 10:24:52.580014   11967 api_server.go:88] waiting for apiserver healthz status ...
	I0923 10:24:52.580048   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0923 10:24:52.580103   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 10:24:52.613694   11967 cri.go:89] found id: "8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1"
	I0923 10:24:52.613720   11967 cri.go:89] found id: ""
	I0923 10:24:52.613729   11967 logs.go:276] 1 containers: [8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1]
	I0923 10:24:52.613775   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:52.617041   11967 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0923 10:24:52.617099   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 10:24:52.649762   11967 cri.go:89] found id: "5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478"
	I0923 10:24:52.649781   11967 cri.go:89] found id: ""
	I0923 10:24:52.649788   11967 logs.go:276] 1 containers: [5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478]
	I0923 10:24:52.649852   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:52.653130   11967 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0923 10:24:52.653186   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 10:24:52.685749   11967 cri.go:89] found id: "1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2"
	I0923 10:24:52.685769   11967 cri.go:89] found id: ""
	I0923 10:24:52.685775   11967 logs.go:276] 1 containers: [1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2]
	I0923 10:24:52.685813   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:52.688875   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0923 10:24:52.688931   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 10:24:52.721693   11967 cri.go:89] found id: "5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9"
	I0923 10:24:52.721716   11967 cri.go:89] found id: ""
	I0923 10:24:52.721723   11967 logs.go:276] 1 containers: [5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9]
	I0923 10:24:52.721772   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:52.725081   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0923 10:24:52.725136   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 10:24:52.759437   11967 cri.go:89] found id: "60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41"
	I0923 10:24:52.759464   11967 cri.go:89] found id: ""
	I0923 10:24:52.759474   11967 logs.go:276] 1 containers: [60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41]
	I0923 10:24:52.759530   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:52.762872   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 10:24:52.762937   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 10:24:52.797876   11967 cri.go:89] found id: "3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86"
	I0923 10:24:52.797893   11967 cri.go:89] found id: ""
	I0923 10:24:52.797900   11967 logs.go:276] 1 containers: [3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86]
	I0923 10:24:52.797940   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:52.801151   11967 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0923 10:24:52.801201   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 10:24:52.833315   11967 cri.go:89] found id: "3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147"
	I0923 10:24:52.833339   11967 cri.go:89] found id: ""
	I0923 10:24:52.833346   11967 logs.go:276] 1 containers: [3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147]
	I0923 10:24:52.833387   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:52.836655   11967 logs.go:123] Gathering logs for describe nodes ...
	I0923 10:24:52.836681   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 10:24:52.927959   11967 logs.go:123] Gathering logs for kube-apiserver [8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1] ...
	I0923 10:24:52.927988   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1"
	I0923 10:24:52.970219   11967 logs.go:123] Gathering logs for coredns [1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2] ...
	I0923 10:24:52.970246   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2"
	I0923 10:24:53.005352   11967 logs.go:123] Gathering logs for kube-scheduler [5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9] ...
	I0923 10:24:53.005388   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9"
	I0923 10:24:53.043256   11967 logs.go:123] Gathering logs for kube-controller-manager [3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86] ...
	I0923 10:24:53.043284   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86"
	I0923 10:24:53.097302   11967 logs.go:123] Gathering logs for CRI-O ...
	I0923 10:24:53.097340   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0923 10:24:53.173928   11967 logs.go:123] Gathering logs for container status ...
	I0923 10:24:53.173959   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 10:24:53.214820   11967 logs.go:123] Gathering logs for dmesg ...
	I0923 10:24:53.214848   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 10:24:53.226459   11967 logs.go:123] Gathering logs for etcd [5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478] ...
	I0923 10:24:53.226486   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478"
	I0923 10:24:53.269173   11967 logs.go:123] Gathering logs for kube-proxy [60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41] ...
	I0923 10:24:53.269204   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41"
	I0923 10:24:53.302182   11967 logs.go:123] Gathering logs for kindnet [3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147] ...
	I0923 10:24:53.302257   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147"
	I0923 10:24:53.338936   11967 logs.go:123] Gathering logs for kubelet ...
	I0923 10:24:53.338965   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 10:24:53.384315   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778748    1645 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:24:53.384503   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778805    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	W0923 10:24:53.384632   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778858    1645 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:24:53.384787   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778872    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	I0923 10:24:53.422192   11967 out.go:358] Setting ErrFile to fd 2...
	I0923 10:24:53.422221   11967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 10:24:53.422272   11967 out.go:270] X Problems detected in kubelet:
	W0923 10:24:53.422279   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778748    1645 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:24:53.422286   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778805    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	W0923 10:24:53.422294   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778858    1645 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:24:53.422303   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778872    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	I0923 10:24:53.422308   11967 out.go:358] Setting ErrFile to fd 2...
	I0923 10:24:53.422314   11967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:25:03.423825   11967 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 10:25:03.428133   11967 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0923 10:25:03.428969   11967 api_server.go:141] control plane version: v1.31.1
	I0923 10:25:03.428992   11967 api_server.go:131] duration metric: took 10.848971435s to wait for apiserver health ...
	I0923 10:25:03.429000   11967 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 10:25:03.429020   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0923 10:25:03.429067   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 10:25:03.463555   11967 cri.go:89] found id: "8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1"
	I0923 10:25:03.463573   11967 cri.go:89] found id: ""
	I0923 10:25:03.463582   11967 logs.go:276] 1 containers: [8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1]
	I0923 10:25:03.463622   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:25:03.466867   11967 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0923 10:25:03.466923   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 10:25:03.498838   11967 cri.go:89] found id: "5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478"
	I0923 10:25:03.498862   11967 cri.go:89] found id: ""
	I0923 10:25:03.498870   11967 logs.go:276] 1 containers: [5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478]
	I0923 10:25:03.498916   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:25:03.502169   11967 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0923 10:25:03.502224   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 10:25:03.535181   11967 cri.go:89] found id: "1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2"
	I0923 10:25:03.535202   11967 cri.go:89] found id: ""
	I0923 10:25:03.535211   11967 logs.go:276] 1 containers: [1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2]
	I0923 10:25:03.535260   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:25:03.538506   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0923 10:25:03.538568   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 10:25:03.571929   11967 cri.go:89] found id: "5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9"
	I0923 10:25:03.571954   11967 cri.go:89] found id: ""
	I0923 10:25:03.571963   11967 logs.go:276] 1 containers: [5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9]
	I0923 10:25:03.572007   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:25:03.575352   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0923 10:25:03.575421   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 10:25:03.608263   11967 cri.go:89] found id: "60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41"
	I0923 10:25:03.608286   11967 cri.go:89] found id: ""
	I0923 10:25:03.608296   11967 logs.go:276] 1 containers: [60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41]
	I0923 10:25:03.608353   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:25:03.611725   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 10:25:03.611781   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 10:25:03.643940   11967 cri.go:89] found id: "3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86"
	I0923 10:25:03.643974   11967 cri.go:89] found id: ""
	I0923 10:25:03.643985   11967 logs.go:276] 1 containers: [3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86]
	I0923 10:25:03.644031   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:25:03.647205   11967 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0923 10:25:03.647259   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 10:25:03.680120   11967 cri.go:89] found id: "3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147"
	I0923 10:25:03.680145   11967 cri.go:89] found id: ""
	I0923 10:25:03.680155   11967 logs.go:276] 1 containers: [3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147]
	I0923 10:25:03.680197   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:25:03.683474   11967 logs.go:123] Gathering logs for describe nodes ...
	I0923 10:25:03.683500   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 10:25:03.783529   11967 logs.go:123] Gathering logs for kube-controller-manager [3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86] ...
	I0923 10:25:03.783558   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86"
	I0923 10:25:03.838870   11967 logs.go:123] Gathering logs for container status ...
	I0923 10:25:03.838909   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 10:25:03.879312   11967 logs.go:123] Gathering logs for kubelet ...
	I0923 10:25:03.879343   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 10:25:03.925363   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778748    1645 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:25:03.925562   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778805    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	W0923 10:25:03.925696   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778858    1645 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:25:03.925851   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778872    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	I0923 10:25:03.966109   11967 logs.go:123] Gathering logs for dmesg ...
	I0923 10:25:03.966148   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 10:25:03.978653   11967 logs.go:123] Gathering logs for coredns [1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2] ...
	I0923 10:25:03.978691   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2"
	I0923 10:25:04.012260   11967 logs.go:123] Gathering logs for kube-scheduler [5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9] ...
	I0923 10:25:04.012287   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9"
	I0923 10:25:04.049729   11967 logs.go:123] Gathering logs for kube-proxy [60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41] ...
	I0923 10:25:04.049759   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41"
	I0923 10:25:04.082626   11967 logs.go:123] Gathering logs for kindnet [3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147] ...
	I0923 10:25:04.082662   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147"
	I0923 10:25:04.117339   11967 logs.go:123] Gathering logs for CRI-O ...
	I0923 10:25:04.117364   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0923 10:25:04.188147   11967 logs.go:123] Gathering logs for kube-apiserver [8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1] ...
	I0923 10:25:04.188192   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1"
	I0923 10:25:04.230982   11967 logs.go:123] Gathering logs for etcd [5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478] ...
	I0923 10:25:04.231014   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478"
	I0923 10:25:04.275512   11967 out.go:358] Setting ErrFile to fd 2...
	I0923 10:25:04.275542   11967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 10:25:04.275603   11967 out.go:270] X Problems detected in kubelet:
	W0923 10:25:04.275611   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778748    1645 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:25:04.275621   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778805    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	W0923 10:25:04.275632   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778858    1645 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:25:04.275639   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778872    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	I0923 10:25:04.275644   11967 out.go:358] Setting ErrFile to fd 2...
	I0923 10:25:04.275655   11967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:25:14.287581   11967 system_pods.go:59] 18 kube-system pods found
	I0923 10:25:14.287615   11967 system_pods.go:61] "coredns-7c65d6cfc9-fx58w" [76135cab-71d6-4fbc-9730-7e157e19b3d1] Running
	I0923 10:25:14.287621   11967 system_pods.go:61] "csi-hostpath-attacher-0" [c14ba032-645c-477a-8576-55cfd6df0d60] Running
	I0923 10:25:14.287624   11967 system_pods.go:61] "csi-hostpath-resizer-0" [9c153fb6-cf96-4170-aba0-81da3c93da24] Running
	I0923 10:25:14.287628   11967 system_pods.go:61] "csi-hostpathplugin-jb7xc" [e6337313-aeb5-44b2-9ac3-0ad53d08846e] Running
	I0923 10:25:14.287631   11967 system_pods.go:61] "etcd-addons-445250" [3f591ed3-ef76-488a-8099-62df99f1aad4] Running
	I0923 10:25:14.287634   11967 system_pods.go:61] "kindnet-dzbp5" [add1ea93-1e0d-43a8-bef7-651410611beb] Running
	I0923 10:25:14.287638   11967 system_pods.go:61] "kube-apiserver-addons-445250" [dc91b9f8-0364-49b3-9a53-60f0bcda9e0f] Running
	I0923 10:25:14.287641   11967 system_pods.go:61] "kube-controller-manager-addons-445250" [cf367f20-e011-4533-85f2-3353fc3d0730] Running
	I0923 10:25:14.287646   11967 system_pods.go:61] "kube-ingress-dns-minikube" [2eb91201-ae53-4248-b0dc-bc754dc7f77c] Running
	I0923 10:25:14.287649   11967 system_pods.go:61] "kube-proxy-wkmtk" [fbf3d292-a3ed-4397-bfb9-c32ebca66f2a] Running
	I0923 10:25:14.287652   11967 system_pods.go:61] "kube-scheduler-addons-445250" [a53aad31-25c2-4939-a256-7dedca01ddd7] Running
	I0923 10:25:14.287656   11967 system_pods.go:61] "metrics-server-84c5f94fbc-7csnr" [de3ce7e3-ca3b-4719-baa0-60b0964a15e6] Running
	I0923 10:25:14.287661   11967 system_pods.go:61] "nvidia-device-plugin-daemonset-649c2" [ad56c28d-1cef-404e-a46b-44ed08feea84] Running
	I0923 10:25:14.287666   11967 system_pods.go:61] "registry-66c9cd494c-nrpsw" [40d0085a-ea70-4052-ad07-a26bb7092539] Running
	I0923 10:25:14.287672   11967 system_pods.go:61] "registry-proxy-gnlc5" [d7382df4-3be8-48d0-9dcb-8cb5cc78647c] Running
	I0923 10:25:14.287675   11967 system_pods.go:61] "snapshot-controller-56fcc65765-dlmwp" [fd57301d-090a-49ee-a7a9-64fe81f0524a] Running
	I0923 10:25:14.287681   11967 system_pods.go:61] "snapshot-controller-56fcc65765-gvjzd" [8a3bfbc9-c59d-4af0-9e6d-c7823fa7b098] Running
	I0923 10:25:14.287685   11967 system_pods.go:61] "storage-provisioner" [b95afb17-c57c-4bcb-9763-8c43faa5ee12] Running
	I0923 10:25:14.287693   11967 system_pods.go:74] duration metric: took 10.858688236s to wait for pod list to return data ...
	I0923 10:25:14.287702   11967 default_sa.go:34] waiting for default service account to be created ...
	I0923 10:25:14.289991   11967 default_sa.go:45] found service account: "default"
	I0923 10:25:14.290010   11967 default_sa.go:55] duration metric: took 2.299912ms for default service account to be created ...
	I0923 10:25:14.290018   11967 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 10:25:14.298150   11967 system_pods.go:86] 18 kube-system pods found
	I0923 10:25:14.298176   11967 system_pods.go:89] "coredns-7c65d6cfc9-fx58w" [76135cab-71d6-4fbc-9730-7e157e19b3d1] Running
	I0923 10:25:14.298181   11967 system_pods.go:89] "csi-hostpath-attacher-0" [c14ba032-645c-477a-8576-55cfd6df0d60] Running
	I0923 10:25:14.298185   11967 system_pods.go:89] "csi-hostpath-resizer-0" [9c153fb6-cf96-4170-aba0-81da3c93da24] Running
	I0923 10:25:14.298188   11967 system_pods.go:89] "csi-hostpathplugin-jb7xc" [e6337313-aeb5-44b2-9ac3-0ad53d08846e] Running
	I0923 10:25:14.298192   11967 system_pods.go:89] "etcd-addons-445250" [3f591ed3-ef76-488a-8099-62df99f1aad4] Running
	I0923 10:25:14.298196   11967 system_pods.go:89] "kindnet-dzbp5" [add1ea93-1e0d-43a8-bef7-651410611beb] Running
	I0923 10:25:14.298200   11967 system_pods.go:89] "kube-apiserver-addons-445250" [dc91b9f8-0364-49b3-9a53-60f0bcda9e0f] Running
	I0923 10:25:14.298205   11967 system_pods.go:89] "kube-controller-manager-addons-445250" [cf367f20-e011-4533-85f2-3353fc3d0730] Running
	I0923 10:25:14.298208   11967 system_pods.go:89] "kube-ingress-dns-minikube" [2eb91201-ae53-4248-b0dc-bc754dc7f77c] Running
	I0923 10:25:14.298212   11967 system_pods.go:89] "kube-proxy-wkmtk" [fbf3d292-a3ed-4397-bfb9-c32ebca66f2a] Running
	I0923 10:25:14.298218   11967 system_pods.go:89] "kube-scheduler-addons-445250" [a53aad31-25c2-4939-a256-7dedca01ddd7] Running
	I0923 10:25:14.298222   11967 system_pods.go:89] "metrics-server-84c5f94fbc-7csnr" [de3ce7e3-ca3b-4719-baa0-60b0964a15e6] Running
	I0923 10:25:14.298227   11967 system_pods.go:89] "nvidia-device-plugin-daemonset-649c2" [ad56c28d-1cef-404e-a46b-44ed08feea84] Running
	I0923 10:25:14.298230   11967 system_pods.go:89] "registry-66c9cd494c-nrpsw" [40d0085a-ea70-4052-ad07-a26bb7092539] Running
	I0923 10:25:14.298236   11967 system_pods.go:89] "registry-proxy-gnlc5" [d7382df4-3be8-48d0-9dcb-8cb5cc78647c] Running
	I0923 10:25:14.298239   11967 system_pods.go:89] "snapshot-controller-56fcc65765-dlmwp" [fd57301d-090a-49ee-a7a9-64fe81f0524a] Running
	I0923 10:25:14.298244   11967 system_pods.go:89] "snapshot-controller-56fcc65765-gvjzd" [8a3bfbc9-c59d-4af0-9e6d-c7823fa7b098] Running
	I0923 10:25:14.298247   11967 system_pods.go:89] "storage-provisioner" [b95afb17-c57c-4bcb-9763-8c43faa5ee12] Running
	I0923 10:25:14.298253   11967 system_pods.go:126] duration metric: took 8.230518ms to wait for k8s-apps to be running ...
	I0923 10:25:14.298262   11967 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 10:25:14.298303   11967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:25:14.309069   11967 system_svc.go:56] duration metric: took 10.799947ms WaitForService to wait for kubelet
	I0923 10:25:14.309093   11967 kubeadm.go:582] duration metric: took 2m43.668407459s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:25:14.309111   11967 node_conditions.go:102] verifying NodePressure condition ...
	I0923 10:25:14.312018   11967 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0923 10:25:14.312045   11967 node_conditions.go:123] node cpu capacity is 8
	I0923 10:25:14.312058   11967 node_conditions.go:105] duration metric: took 2.941824ms to run NodePressure ...
	I0923 10:25:14.312068   11967 start.go:241] waiting for startup goroutines ...
	I0923 10:25:14.312077   11967 start.go:246] waiting for cluster config update ...
	I0923 10:25:14.312094   11967 start.go:255] writing updated cluster config ...
	I0923 10:25:14.312343   11967 ssh_runner.go:195] Run: rm -f paused
	I0923 10:25:14.359947   11967 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 10:25:14.362510   11967 out.go:177] * Done! kubectl is now configured to use "addons-445250" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 23 10:34:25 addons-445250 crio[1027]: time="2024-09-23 10:34:25.726612900Z" level=info msg="Removed pod sandbox: c89d4dc541aa7329bb95ac13baf10555f99a6e7ea7996d6f2333a3e7f31dc5bc" id=5556b553-bcb3-4beb-9df5-c2f6ca53e0dd name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 23 10:34:27 addons-445250 crio[1027]: time="2024-09-23 10:34:27.317137550Z" level=info msg="Stopping pod sandbox: dcaffd40c3d8f905a292fb27b5288e92541d7491cbca7215d1040b6f0bd54ae5" id=fbf1ea90-b0b8-486b-9273-7fa1616641d2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 23 10:34:27 addons-445250 crio[1027]: time="2024-09-23 10:34:27.317393142Z" level=info msg="Got pod network &{Name:registry-test Namespace:default ID:dcaffd40c3d8f905a292fb27b5288e92541d7491cbca7215d1040b6f0bd54ae5 UID:41ad7ce2-f9fa-4bf4-8cf0-974c9ef37418 NetNS:/var/run/netns/b1b2d705-3aab-48cc-92c9-c0884c54fdac Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 23 10:34:27 addons-445250 crio[1027]: time="2024-09-23 10:34:27.317536605Z" level=info msg="Deleting pod default_registry-test from CNI network \"kindnet\" (type=ptp)"
	Sep 23 10:34:27 addons-445250 crio[1027]: time="2024-09-23 10:34:27.356603365Z" level=info msg="Stopped pod sandbox: dcaffd40c3d8f905a292fb27b5288e92541d7491cbca7215d1040b6f0bd54ae5" id=fbf1ea90-b0b8-486b-9273-7fa1616641d2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 23 10:34:27 addons-445250 crio[1027]: time="2024-09-23 10:34:27.917451688Z" level=info msg="Stopping container: 76287a05db6b97b6b8068094b3b4cd23046ca7b29584261ec8d1c41a41aa1acb (timeout: 30s)" id=4168509f-9c76-492e-bbba-f4b638a8c6d8 name=/runtime.v1.RuntimeService/StopContainer
	Sep 23 10:34:27 addons-445250 crio[1027]: time="2024-09-23 10:34:27.926012871Z" level=info msg="Stopping container: 0e243d9eb20b0bc38de11d573ea4fb7eec20655ae380a20b24432f96edbf5999 (timeout: 30s)" id=9dc3f5ee-164c-42a6-9f32-3efae0bd6bf9 name=/runtime.v1.RuntimeService/StopContainer
	Sep 23 10:34:27 addons-445250 conmon[4280]: conmon 76287a05db6b97b6b806 <ninfo>: container 4292 exited with status 2
	Sep 23 10:34:28 addons-445250 crio[1027]: time="2024-09-23 10:34:28.056544619Z" level=info msg="Stopped container 76287a05db6b97b6b8068094b3b4cd23046ca7b29584261ec8d1c41a41aa1acb: kube-system/registry-66c9cd494c-nrpsw/registry" id=4168509f-9c76-492e-bbba-f4b638a8c6d8 name=/runtime.v1.RuntimeService/StopContainer
	Sep 23 10:34:28 addons-445250 crio[1027]: time="2024-09-23 10:34:28.057098570Z" level=info msg="Stopping pod sandbox: 79d93d4fddc4a2a12193bcac1f0898e51b7516c95cf02c63d47d99861e813196" id=c0260b9e-f86c-4972-800f-91c8e5f35864 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 23 10:34:28 addons-445250 crio[1027]: time="2024-09-23 10:34:28.057356297Z" level=info msg="Got pod network &{Name:registry-66c9cd494c-nrpsw Namespace:kube-system ID:79d93d4fddc4a2a12193bcac1f0898e51b7516c95cf02c63d47d99861e813196 UID:40d0085a-ea70-4052-ad07-a26bb7092539 NetNS:/var/run/netns/3600ffaa-c3b8-408e-bba5-7c5ad40b80c3 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 23 10:34:28 addons-445250 crio[1027]: time="2024-09-23 10:34:28.057543583Z" level=info msg="Deleting pod kube-system_registry-66c9cd494c-nrpsw from CNI network \"kindnet\" (type=ptp)"
	Sep 23 10:34:28 addons-445250 crio[1027]: time="2024-09-23 10:34:28.069476251Z" level=info msg="Stopped container 0e243d9eb20b0bc38de11d573ea4fb7eec20655ae380a20b24432f96edbf5999: kube-system/registry-proxy-gnlc5/registry-proxy" id=9dc3f5ee-164c-42a6-9f32-3efae0bd6bf9 name=/runtime.v1.RuntimeService/StopContainer
	Sep 23 10:34:28 addons-445250 crio[1027]: time="2024-09-23 10:34:28.070128724Z" level=info msg="Stopping pod sandbox: 7d90702200773dcfae80520ea21ce5e63dd301d2d1bb29b35dbdf19b454ae5a1" id=90f572ba-1643-48f8-89ec-350c926d7d8f name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 23 10:34:28 addons-445250 crio[1027]: time="2024-09-23 10:34:28.074422042Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-YR5BH7NWHISXU5A4 - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n:KUBE-HP-JH5HFPCV6DIVZSQY - [0:0]\n:KUBE-HP-GHTLDBWI732LUHBH - [0:0]\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-p4lgm_ingress-nginx_0501f316-a471-4550-ae04-f97444d65783_0_ hostport 443\" -m tcp --dport 443 -j KUBE-HP-JH5HFPCV6DIVZSQY\n-A KUBE-HOSTPORTS -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-p4lgm_ingress-nginx_0501f316-a471-4550-ae04-f97444d65783_0_ hostport 80\" -m tcp --dport 80 -j KUBE-HP-YR5BH7NWHISXU5A4\n-A KUBE-HP-JH5HFPCV6DIVZSQY -s 10.244.0.20/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-p4lgm_ingress-nginx_0501f316-a471-4550-ae04-f97444d65783_0_ hostport 443\" -j KUBE-MARK-MASQ\n-A KUBE-HP-JH5HFPCV6DIVZSQY -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-p4lgm_ingress-nginx_0501f316-a471-4550-a
e04-f97444d65783_0_ hostport 443\" -m tcp -j DNAT --to-destination 10.244.0.20:443\n-A KUBE-HP-YR5BH7NWHISXU5A4 -s 10.244.0.20/32 -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-p4lgm_ingress-nginx_0501f316-a471-4550-ae04-f97444d65783_0_ hostport 80\" -j KUBE-MARK-MASQ\n-A KUBE-HP-YR5BH7NWHISXU5A4 -p tcp -m comment --comment \"k8s_ingress-nginx-controller-bc57996ff-p4lgm_ingress-nginx_0501f316-a471-4550-ae04-f97444d65783_0_ hostport 80\" -m tcp -j DNAT --to-destination 10.244.0.20:80\n-X KUBE-HP-GHTLDBWI732LUHBH\nCOMMIT\n"
	Sep 23 10:34:28 addons-445250 crio[1027]: time="2024-09-23 10:34:28.076880945Z" level=info msg="Closing host port tcp:5000"
	Sep 23 10:34:28 addons-445250 crio[1027]: time="2024-09-23 10:34:28.078601314Z" level=info msg="Host port tcp:5000 does not have an open socket"
	Sep 23 10:34:28 addons-445250 crio[1027]: time="2024-09-23 10:34:28.078771683Z" level=info msg="Got pod network &{Name:registry-proxy-gnlc5 Namespace:kube-system ID:7d90702200773dcfae80520ea21ce5e63dd301d2d1bb29b35dbdf19b454ae5a1 UID:d7382df4-3be8-48d0-9dcb-8cb5cc78647c NetNS:/var/run/netns/4c26c162-e7a5-4972-b8d3-512ed402120a Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 23 10:34:28 addons-445250 crio[1027]: time="2024-09-23 10:34:28.078885906Z" level=info msg="Deleting pod kube-system_registry-proxy-gnlc5 from CNI network \"kindnet\" (type=ptp)"
	Sep 23 10:34:28 addons-445250 crio[1027]: time="2024-09-23 10:34:28.094904901Z" level=info msg="Stopped pod sandbox: 79d93d4fddc4a2a12193bcac1f0898e51b7516c95cf02c63d47d99861e813196" id=c0260b9e-f86c-4972-800f-91c8e5f35864 name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 23 10:34:28 addons-445250 crio[1027]: time="2024-09-23 10:34:28.127097672Z" level=info msg="Stopped pod sandbox: 7d90702200773dcfae80520ea21ce5e63dd301d2d1bb29b35dbdf19b454ae5a1" id=90f572ba-1643-48f8-89ec-350c926d7d8f name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 23 10:34:28 addons-445250 crio[1027]: time="2024-09-23 10:34:28.511792565Z" level=info msg="Removing container: 0e243d9eb20b0bc38de11d573ea4fb7eec20655ae380a20b24432f96edbf5999" id=a38dace7-7739-4e4e-8c13-0e7c822c25e6 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 23 10:34:28 addons-445250 crio[1027]: time="2024-09-23 10:34:28.525104763Z" level=info msg="Removed container 0e243d9eb20b0bc38de11d573ea4fb7eec20655ae380a20b24432f96edbf5999: kube-system/registry-proxy-gnlc5/registry-proxy" id=a38dace7-7739-4e4e-8c13-0e7c822c25e6 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 23 10:34:28 addons-445250 crio[1027]: time="2024-09-23 10:34:28.526810444Z" level=info msg="Removing container: 76287a05db6b97b6b8068094b3b4cd23046ca7b29584261ec8d1c41a41aa1acb" id=04223080-fb42-4dcb-b710-055538798fe5 name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 23 10:34:28 addons-445250 crio[1027]: time="2024-09-23 10:34:28.542126655Z" level=info msg="Removed container 76287a05db6b97b6b8068094b3b4cd23046ca7b29584261ec8d1c41a41aa1acb: kube-system/registry-66c9cd494c-nrpsw/registry" id=04223080-fb42-4dcb-b710-055538798fe5 name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                       ATTEMPT             POD ID              POD
	792eb631ff890       a416a98b71e224a31ee99cff8e16063554498227d2b696152a9c3e0aa65e5824                                                             5 seconds ago       Exited              helper-pod                 0                   47c1de41c6dfa       helper-pod-delete-pvc-f2f3f271-6db1-4176-931b-e93dd714c1c9
	9b9d147b1d7d7       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              42 seconds ago      Running             nginx                      0                   c86cb59ddb3ca       nginx
	4694d204eb1ea       registry.k8s.io/ingress-nginx/controller@sha256:401d25a09ee8fe9fd9d33c5051531e8ebfa4ded95ff09830af8cc48c8e5aeaa6             10 minutes ago      Running             controller                 0                   b5b8e48a5b762       ingress-nginx-controller-bc57996ff-p4lgm
	f43878fce15a7       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                             10 minutes ago      Exited              patch                      3                   fe27de295d179       ingress-nginx-admission-patch-4wv4b
	595e24a79c3cc       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 10 minutes ago      Running             gcp-auth                   0                   269c70f2ed966       gcp-auth-89d5ffd79-wh69l
	2751d0445bd9a       nvcr.io/nvidia/k8s-device-plugin@sha256:ed39e22c8b71343fb996737741a99da88ce6c75dd83b5c520e0b3d8e8a884c47                     10 minutes ago      Running             nvidia-device-plugin-ctr   0                   c98f8f7af1118       nvidia-device-plugin-daemonset-649c2
	d5868858343b4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   10 minutes ago      Exited              create                     0                   c3d692cdaaff9       ingress-nginx-admission-create-8v7x6
	d86adcd030248       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:07c8f5b205a3f8971bfc6d460978ae00de35f17e5d5392b1de8de02356f85dab             10 minutes ago      Running             minikube-ingress-dns       0                   1ce8603496108       kube-ingress-dns-minikube
	26fbe31bfc2e3       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        10 minutes ago      Running             metrics-server             0                   060b6c8c02d4c       metrics-server-84c5f94fbc-7csnr
	4e4caabf26ecb       docker.io/rancher/local-path-provisioner@sha256:73f712e7af12b06720c35ce75217f904f00e4bd96de79f8db1cf160112e667ef             11 minutes ago      Running             local-path-provisioner     0                   a256f0f7ef207       local-path-provisioner-86d989889c-td5pk
	ded51b2a91a2a       gcr.io/cloud-spanner-emulator/emulator@sha256:be105fc4b12849783aa20d987a35b86ed5296669595f8a7b2d79ad0cd8e193bf               11 minutes ago      Running             cloud-spanner-emulator     0                   e7c77b9f9ecae       cloud-spanner-emulator-5b584cc74-rztwp
	1ebaed16470de       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             11 minutes ago      Running             coredns                    0                   8b47c72a2e89f       coredns-7c65d6cfc9-fx58w
	66c2617c6cdee       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             11 minutes ago      Running             storage-provisioner        0                   ca64b60aaf77d       storage-provisioner
	60d69acfd0786       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             11 minutes ago      Running             kube-proxy                 0                   8b3d1fd790d7d       kube-proxy-wkmtk
	3fc705a9a7747       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                             11 minutes ago      Running             kindnet-cni                0                   16dd7a97e2486       kindnet-dzbp5
	5a7d4dfeab76c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             12 minutes ago      Running             etcd                       0                   d78357fa957f5       etcd-addons-445250
	3fc6d875aa953       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             12 minutes ago      Running             kube-controller-manager    0                   b238baa295476       kube-controller-manager-addons-445250
	5e1692605ef5b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             12 minutes ago      Running             kube-scheduler             0                   1912f3295ca7d       kube-scheduler-addons-445250
	8b87d8d2ee711       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             12 minutes ago      Running             kube-apiserver             0                   f275d2a0ce43d       kube-apiserver-addons-445250
	
	
	==> coredns [1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2] <==
	[INFO] 10.244.0.17:51021 - 2201 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091016s
	[INFO] 10.244.0.17:48133 - 44271 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000049343s
	[INFO] 10.244.0.17:48133 - 55785 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000088427s
	[INFO] 10.244.0.17:49831 - 11625 "AAAA IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004641559s
	[INFO] 10.244.0.17:49831 - 53357 "A IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.008874643s
	[INFO] 10.244.0.17:47951 - 29897 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004484598s
	[INFO] 10.244.0.17:47951 - 12748 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.01442901s
	[INFO] 10.244.0.17:48028 - 15319 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004123886s
	[INFO] 10.244.0.17:48028 - 211 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004165972s
	[INFO] 10.244.0.17:47195 - 44952 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000070798s
	[INFO] 10.244.0.17:47195 - 64917 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000141683s
	[INFO] 10.244.0.19:37440 - 47006 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000160757s
	[INFO] 10.244.0.19:51770 - 28058 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000235131s
	[INFO] 10.244.0.19:37999 - 57631 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000117212s
	[INFO] 10.244.0.19:60851 - 28099 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000164334s
	[INFO] 10.244.0.19:60473 - 52842 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127623s
	[INFO] 10.244.0.19:60093 - 46732 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000183998s
	[INFO] 10.244.0.19:59180 - 21854 "A IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.005303324s
	[INFO] 10.244.0.19:53723 - 13226 "AAAA IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.006472921s
	[INFO] 10.244.0.19:57517 - 53934 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004844258s
	[INFO] 10.244.0.19:37603 - 62628 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007796574s
	[INFO] 10.244.0.19:52499 - 62644 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004780066s
	[INFO] 10.244.0.19:43363 - 37803 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005512487s
	[INFO] 10.244.0.19:50641 - 54574 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 140 0.000695895s
	[INFO] 10.244.0.19:42118 - 61953 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 116 0.000813877s
	
	
	==> describe nodes <==
	Name:               addons-445250
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-445250
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=addons-445250
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T10_22_26_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-445250
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:22:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-445250
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:34:20 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 10:33:59 +0000   Mon, 23 Sep 2024 10:22:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 10:33:59 +0000   Mon, 23 Sep 2024 10:22:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 10:33:59 +0000   Mon, 23 Sep 2024 10:22:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 10:33:59 +0000   Mon, 23 Sep 2024 10:23:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-445250
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 98cd57bf5c0b47f391b0c0e0a30c5e14
	  System UUID:                64a901d1-6ec3-40d1-a503-55d7681a31ba
	  Boot ID:                    7fc2d313-9727-4ab1-967f-13a3c84ada15
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (17 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m15s
	  default                     cloud-spanner-emulator-5b584cc74-rztwp      0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     nginx                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         51s
	  gcp-auth                    gcp-auth-89d5ffd79-wh69l                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  ingress-nginx               ingress-nginx-controller-bc57996ff-p4lgm    100m (1%)     0 (0%)      90Mi (0%)        0 (0%)         11m
	  kube-system                 coredns-7c65d6cfc9-fx58w                    100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     11m
	  kube-system                 etcd-addons-445250                          100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         12m
	  kube-system                 kindnet-dzbp5                               100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      11m
	  kube-system                 kube-apiserver-addons-445250                250m (3%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-addons-445250       200m (2%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-ingress-dns-minikube                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-wkmtk                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-addons-445250                100m (1%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 metrics-server-84c5f94fbc-7csnr             100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         11m
	  kube-system                 nvidia-device-plugin-daemonset-649c2        0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  local-path-storage          local-path-provisioner-86d989889c-td5pk     0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (13%)  100m (1%)
	  memory             510Mi (1%)   220Mi (0%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 11m   kube-proxy       
	  Normal   Starting                 12m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 12m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  12m   kubelet          Node addons-445250 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    12m   kubelet          Node addons-445250 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     12m   kubelet          Node addons-445250 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           12m   node-controller  Node addons-445250 event: Registered Node addons-445250 in Controller
	  Normal   NodeReady                11m   kubelet          Node addons-445250 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.410512] i8042: Warning: Keylock active
	[  +0.008382] device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
	[  +0.003589] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001035] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000753] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.001022] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000710] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000605] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000867] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000747] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.635766] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.213677] kauditd_printk_skb: 46 callbacks suppressed
	[Sep23 10:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 16 eb c1 e7 c7 82 32 bd d1 20 fc 68 08 00
	[  +1.023987] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 16 eb c1 e7 c7 82 32 bd d1 20 fc 68 08 00
	[  +2.019762] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 16 eb c1 e7 c7 82 32 bd d1 20 fc 68 08 00
	[Sep23 10:34] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 16 eb c1 e7 c7 82 32 bd d1 20 fc 68 08 00
	[  +8.191064] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 16 eb c1 e7 c7 82 32 bd d1 20 fc 68 08 00
	[ +16.126232] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 16 eb c1 e7 c7 82 32 bd d1 20 fc 68 08 00
	
	
	==> etcd [5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478] <==
	{"level":"warn","ts":"2024-09-23T10:22:32.950276Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T10:22:32.636458Z","time spent":"313.791137ms","remote":"127.0.0.1:48942","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":689,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns.17f7d86ece95fb66\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns.17f7d86ece95fb66\" value_size:618 lease:8128032086776975414 >> failure:<>"}
	{"level":"info","ts":"2024-09-23T10:22:32.827522Z","caller":"traceutil/trace.go:171","msg":"trace[1221588109] transaction","detail":"{read_only:false; response_revision:364; number_of_response:1; }","duration":"191.005673ms","start":"2024-09-23T10:22:32.636508Z","end":"2024-09-23T10:22:32.827514Z","steps":["trace[1221588109] 'process raft request'  (duration: 190.655353ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:22:32.950466Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T10:22:32.636501Z","time spent":"313.930653ms","remote":"127.0.0.1:49090","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":201,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" mod_revision:284 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" value_size:136 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" > >"}
	{"level":"warn","ts":"2024-09-23T10:22:33.131317Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.346561ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128032086776975712 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/controllerrevisions/kube-system/nvidia-device-plugin-daemonset-76bfdf4db8\" mod_revision:0 > success:<request_put:<key:\"/registry/controllerrevisions/kube-system/nvidia-device-plugin-daemonset-76bfdf4db8\" value_size:2820 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-23T10:22:33.131445Z","caller":"traceutil/trace.go:171","msg":"trace[1069954861] linearizableReadLoop","detail":"{readStateIndex:377; appliedIndex:376; }","duration":"181.441071ms","start":"2024-09-23T10:22:32.949991Z","end":"2024-09-23T10:22:33.131432Z","steps":["trace[1069954861] 'read index received'  (duration: 75.782049ms)","trace[1069954861] 'applied index is now lower than readState.Index'  (duration: 105.65779ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T10:22:33.131583Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"402.530772ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" ","response":"range_response_count:1 size:992"}
	{"level":"info","ts":"2024-09-23T10:22:33.131616Z","caller":"traceutil/trace.go:171","msg":"trace[2088191430] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:1; response_revision:366; }","duration":"402.569222ms","start":"2024-09-23T10:22:32.729039Z","end":"2024-09-23T10:22:33.131608Z","steps":["trace[2088191430] 'agreement among raft nodes before linearized reading'  (duration: 402.426676ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:22:33.131648Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T10:22:32.729011Z","time spent":"402.631755ms","remote":"127.0.0.1:49284","response type":"/etcdserverpb.KV/Range","request count":0,"request size":35,"response count":1,"response size":1016,"request content":"key:\"/registry/storageclasses/standard\" "}
	{"level":"info","ts":"2024-09-23T10:22:33.131969Z","caller":"traceutil/trace.go:171","msg":"trace[485971237] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"288.992414ms","start":"2024-09-23T10:22:32.842964Z","end":"2024-09-23T10:22:33.131957Z","steps":["trace[485971237] 'process raft request'  (duration: 182.871523ms)","trace[485971237] 'compare'  (duration: 105.121142ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T10:22:33.132153Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"289.499934ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4096"}
	{"level":"info","ts":"2024-09-23T10:22:33.132187Z","caller":"traceutil/trace.go:171","msg":"trace[517510634] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:366; }","duration":"289.537023ms","start":"2024-09-23T10:22:32.842643Z","end":"2024-09-23T10:22:33.132180Z","steps":["trace[517510634] 'agreement among raft nodes before linearized reading'  (duration: 289.463087ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:22:33.539907Z","caller":"traceutil/trace.go:171","msg":"trace[2144953017] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"108.868731ms","start":"2024-09-23T10:22:33.431009Z","end":"2024-09-23T10:22:33.539878Z","steps":["trace[2144953017] 'process raft request'  (duration: 104.859929ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:22:33.541630Z","caller":"traceutil/trace.go:171","msg":"trace[398091402] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"104.416004ms","start":"2024-09-23T10:22:33.437193Z","end":"2024-09-23T10:22:33.541609Z","steps":["trace[398091402] 'process raft request'  (duration: 104.009984ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:22:33.542060Z","caller":"traceutil/trace.go:171","msg":"trace[668743326] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"104.729221ms","start":"2024-09-23T10:22:33.437317Z","end":"2024-09-23T10:22:33.542046Z","steps":["trace[668743326] 'process raft request'  (duration: 103.952712ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:22:33.542277Z","caller":"traceutil/trace.go:171","msg":"trace[1672766993] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"104.838258ms","start":"2024-09-23T10:22:33.437430Z","end":"2024-09-23T10:22:33.542268Z","steps":["trace[1672766993] 'process raft request'  (duration: 103.868629ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:22:33.542412Z","caller":"traceutil/trace.go:171","msg":"trace[1767469839] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"103.483072ms","start":"2024-09-23T10:22:33.438922Z","end":"2024-09-23T10:22:33.542405Z","steps":["trace[1767469839] 'process raft request'  (duration: 102.407175ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:22:33.736052Z","caller":"traceutil/trace.go:171","msg":"trace[227628294] transaction","detail":"{read_only:false; response_revision:384; number_of_response:1; }","duration":"102.334143ms","start":"2024-09-23T10:22:33.633699Z","end":"2024-09-23T10:22:33.736033Z","steps":["trace[227628294] 'process raft request'  (duration: 13.990139ms)","trace[227628294] 'compare'  (duration: 85.643779ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T10:22:33.736225Z","caller":"traceutil/trace.go:171","msg":"trace[2102522964] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"101.939414ms","start":"2024-09-23T10:22:33.634278Z","end":"2024-09-23T10:22:33.736218Z","steps":["trace[2102522964] 'process raft request'  (duration: 99.195559ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:22:34.032263Z","caller":"traceutil/trace.go:171","msg":"trace[1847492038] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"100.284349ms","start":"2024-09-23T10:22:33.931958Z","end":"2024-09-23T10:22:34.032242Z","steps":["trace[1847492038] 'process raft request'  (duration: 99.986846ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:22:34.130120Z","caller":"traceutil/trace.go:171","msg":"trace[300160576] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"190.991092ms","start":"2024-09-23T10:22:33.939083Z","end":"2024-09-23T10:22:34.130074Z","steps":["trace[300160576] 'process raft request'  (duration: 108.050293ms)","trace[300160576] 'store kv pair into bolt db' {req_type:put; key:/registry/deployments/kube-system/coredns; req_size:4078; } (duration: 77.321365ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T10:22:34.431549Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.369404ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T10:22:34.431682Z","caller":"traceutil/trace.go:171","msg":"trace[1877297112] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:429; }","duration":"100.50784ms","start":"2024-09-23T10:22:34.331159Z","end":"2024-09-23T10:22:34.431667Z","steps":["trace[1877297112] 'agreement among raft nodes before linearized reading'  (duration: 100.356061ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:32:21.645850Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1524}
	{"level":"info","ts":"2024-09-23T10:32:21.668993Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1524,"took":"22.717488ms","hash":1048422649,"current-db-size-bytes":6332416,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":3301376,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-09-23T10:32:21.669036Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1048422649,"revision":1524,"compact-revision":-1}
	
	
	==> gcp-auth [595e24a79c3ccf249c4aaed9888b59fd920080ef1b7290f246cb0006fc71308a] <==
	2024/09/23 10:24:13 GCP Auth Webhook started!
	2024/09/23 10:25:14 Ready to marshal response ...
	2024/09/23 10:25:14 Ready to write response ...
	2024/09/23 10:25:14 Ready to marshal response ...
	2024/09/23 10:25:14 Ready to write response ...
	2024/09/23 10:25:14 Ready to marshal response ...
	2024/09/23 10:25:14 Ready to write response ...
	2024/09/23 10:33:27 Ready to marshal response ...
	2024/09/23 10:33:27 Ready to write response ...
	2024/09/23 10:33:35 Ready to marshal response ...
	2024/09/23 10:33:35 Ready to write response ...
	2024/09/23 10:33:38 Ready to marshal response ...
	2024/09/23 10:33:38 Ready to write response ...
	2024/09/23 10:33:52 Ready to marshal response ...
	2024/09/23 10:33:52 Ready to write response ...
	2024/09/23 10:34:09 Ready to marshal response ...
	2024/09/23 10:34:09 Ready to write response ...
	2024/09/23 10:34:09 Ready to marshal response ...
	2024/09/23 10:34:09 Ready to write response ...
	2024/09/23 10:34:22 Ready to marshal response ...
	2024/09/23 10:34:22 Ready to write response ...
	
	
	==> kernel <==
	 10:34:29 up 16 min,  0 users,  load average: 0.84, 0.37, 0.25
	Linux addons-445250 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147] <==
	I0923 10:32:24.636466       1 main.go:299] handling current node
	I0923 10:32:34.628967       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:32:34.629006       1 main.go:299] handling current node
	I0923 10:32:44.629599       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:32:44.629643       1 main.go:299] handling current node
	I0923 10:32:54.635421       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:32:54.635455       1 main.go:299] handling current node
	I0923 10:33:04.633731       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:33:04.633767       1 main.go:299] handling current node
	I0923 10:33:14.629815       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:33:14.629845       1 main.go:299] handling current node
	I0923 10:33:24.628986       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:33:24.629020       1 main.go:299] handling current node
	I0923 10:33:34.628950       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:33:34.628989       1 main.go:299] handling current node
	I0923 10:33:44.629584       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:33:44.629641       1 main.go:299] handling current node
	I0923 10:33:54.629882       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:33:54.629922       1 main.go:299] handling current node
	I0923 10:34:04.629083       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:34:04.629122       1 main.go:299] handling current node
	I0923 10:34:14.629094       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:34:14.629127       1 main.go:299] handling current node
	I0923 10:34:24.629820       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:34:24.629853       1 main.go:299] handling current node
	
	
	==> kube-apiserver [8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1] <==
	I0923 10:24:42.571331       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0923 10:24:46.576903       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.109.89.254:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.109.89.254:443/apis/metrics.k8s.io/v1beta1\": context deadline exceeded" logger="UnhandledError"
	W0923 10:24:46.576928       1 handler_proxy.go:99] no RequestInfo found in the context
	E0923 10:24:46.576989       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0923 10:24:46.587407       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0923 10:33:33.261205       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0923 10:33:34.276041       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0923 10:33:38.712682       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0923 10:33:39.049386       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.47.123"}
	I0923 10:33:49.532462       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0923 10:34:08.670342       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:34:08.670392       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 10:34:08.685068       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:34:08.685107       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 10:34:08.685195       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:34:08.738295       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:34:08.738544       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 10:34:08.826492       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:34:08.826532       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0923 10:34:09.686230       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0923 10:34:09.826884       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0923 10:34:09.841944       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	
	
	==> kube-controller-manager [3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86] <==
	E0923 10:34:09.687606       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0923 10:34:09.828152       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	E0923 10:34:09.843198       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:34:10.661602       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:34:10.661653       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:34:11.037612       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:34:11.037658       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:34:11.421529       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:34:11.421578       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:34:12.573125       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:34:12.573170       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:34:13.061540       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:34:13.061588       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:34:13.349570       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:34:13.349609       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:34:16.984698       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:34:16.984731       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:34:17.675310       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:34:17.675349       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:34:19.257528       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:34:19.257574       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 10:34:23.263315       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="local-path-storage/local-path-provisioner-86d989889c" duration="10.174µs"
	W0923 10:34:26.637585       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:34:26.637628       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 10:34:27.907814       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/registry-66c9cd494c" duration="5.989µs"
	
	
	==> kube-proxy [60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41] <==
	I0923 10:22:34.431903       1 server_linux.go:66] "Using iptables proxy"
	I0923 10:22:35.042477       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0923 10:22:35.042566       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 10:22:35.338576       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 10:22:35.338730       1 server_linux.go:169] "Using iptables Proxier"
	I0923 10:22:35.342534       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 10:22:35.342914       1 server.go:483] "Version info" version="v1.31.1"
	I0923 10:22:35.342944       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 10:22:35.344273       1 config.go:105] "Starting endpoint slice config controller"
	I0923 10:22:35.344364       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 10:22:35.344302       1 config.go:328] "Starting node config controller"
	I0923 10:22:35.344482       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 10:22:35.344292       1 config.go:199] "Starting service config controller"
	I0923 10:22:35.344524       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 10:22:35.445049       1 shared_informer.go:320] Caches are synced for service config
	I0923 10:22:35.445083       1 shared_informer.go:320] Caches are synced for node config
	I0923 10:22:35.445054       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9] <==
	E0923 10:22:23.044121       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0923 10:22:23.044074       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:23.044627       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0923 10:22:23.044704       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 10:22:23.044717       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:23.044254       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 10:22:23.044734       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0923 10:22:23.044753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:23.044774       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 10:22:23.044800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:23.045089       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 10:22:23.045151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:23.983340       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 10:22:23.983386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:23.986617       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 10:22:23.986665       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:24.010130       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 10:22:24.010176       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:24.047286       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 10:22:24.047439       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:24.182956       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 10:22:24.183033       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:24.191245       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 10:22:24.191331       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0923 10:22:24.442656       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 10:34:25 addons-445250 kubelet[1645]: I0923 10:34:25.501976    1645 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="47c1de41c6dfaab29d622074409f6c2890796d31361e48a165d08c7b478eba3c"
	Sep 23 10:34:25 addons-445250 kubelet[1645]: I0923 10:34:25.615436    1645 scope.go:117] "RemoveContainer" containerID="2b926f5256049029037bea4f9b0a1fbc34bd5a1eb11b67a1499981cca3206503"
	Sep 23 10:34:25 addons-445250 kubelet[1645]: I0923 10:34:25.634980    1645 scope.go:117] "RemoveContainer" containerID="12bdbf504724416c8108e913a0bf9858c194dec1ec449ccee6905bc95642293e"
	Sep 23 10:34:25 addons-445250 kubelet[1645]: E0923 10:34:25.778583    1645 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087665778324957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:547012,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:34:25 addons-445250 kubelet[1645]: E0923 10:34:25.778618    1645 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087665778324957,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:547012,},InodesUsed:&UInt64Value{Value:207,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:34:27 addons-445250 kubelet[1645]: I0923 10:34:27.544428    1645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/41ad7ce2-f9fa-4bf4-8cf0-974c9ef37418-gcp-creds\") pod \"41ad7ce2-f9fa-4bf4-8cf0-974c9ef37418\" (UID: \"41ad7ce2-f9fa-4bf4-8cf0-974c9ef37418\") "
	Sep 23 10:34:27 addons-445250 kubelet[1645]: I0923 10:34:27.544493    1645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dxqcf\" (UniqueName: \"kubernetes.io/projected/41ad7ce2-f9fa-4bf4-8cf0-974c9ef37418-kube-api-access-dxqcf\") pod \"41ad7ce2-f9fa-4bf4-8cf0-974c9ef37418\" (UID: \"41ad7ce2-f9fa-4bf4-8cf0-974c9ef37418\") "
	Sep 23 10:34:27 addons-445250 kubelet[1645]: I0923 10:34:27.544497    1645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/41ad7ce2-f9fa-4bf4-8cf0-974c9ef37418-gcp-creds" (OuterVolumeSpecName: "gcp-creds") pod "41ad7ce2-f9fa-4bf4-8cf0-974c9ef37418" (UID: "41ad7ce2-f9fa-4bf4-8cf0-974c9ef37418"). InnerVolumeSpecName "gcp-creds". PluginName "kubernetes.io/host-path", VolumeGidValue ""
	Sep 23 10:34:27 addons-445250 kubelet[1645]: I0923 10:34:27.544568    1645 reconciler_common.go:288] "Volume detached for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/41ad7ce2-f9fa-4bf4-8cf0-974c9ef37418-gcp-creds\") on node \"addons-445250\" DevicePath \"\""
	Sep 23 10:34:27 addons-445250 kubelet[1645]: I0923 10:34:27.546249    1645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/41ad7ce2-f9fa-4bf4-8cf0-974c9ef37418-kube-api-access-dxqcf" (OuterVolumeSpecName: "kube-api-access-dxqcf") pod "41ad7ce2-f9fa-4bf4-8cf0-974c9ef37418" (UID: "41ad7ce2-f9fa-4bf4-8cf0-974c9ef37418"). InnerVolumeSpecName "kube-api-access-dxqcf". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 10:34:27 addons-445250 kubelet[1645]: I0923 10:34:27.645641    1645 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-dxqcf\" (UniqueName: \"kubernetes.io/projected/41ad7ce2-f9fa-4bf4-8cf0-974c9ef37418-kube-api-access-dxqcf\") on node \"addons-445250\" DevicePath \"\""
	Sep 23 10:34:28 addons-445250 kubelet[1645]: I0923 10:34:28.249699    1645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g5v8\" (UniqueName: \"kubernetes.io/projected/d7382df4-3be8-48d0-9dcb-8cb5cc78647c-kube-api-access-6g5v8\") pod \"d7382df4-3be8-48d0-9dcb-8cb5cc78647c\" (UID: \"d7382df4-3be8-48d0-9dcb-8cb5cc78647c\") "
	Sep 23 10:34:28 addons-445250 kubelet[1645]: I0923 10:34:28.249757    1645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxccv\" (UniqueName: \"kubernetes.io/projected/40d0085a-ea70-4052-ad07-a26bb7092539-kube-api-access-wxccv\") pod \"40d0085a-ea70-4052-ad07-a26bb7092539\" (UID: \"40d0085a-ea70-4052-ad07-a26bb7092539\") "
	Sep 23 10:34:28 addons-445250 kubelet[1645]: I0923 10:34:28.251813    1645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7382df4-3be8-48d0-9dcb-8cb5cc78647c-kube-api-access-6g5v8" (OuterVolumeSpecName: "kube-api-access-6g5v8") pod "d7382df4-3be8-48d0-9dcb-8cb5cc78647c" (UID: "d7382df4-3be8-48d0-9dcb-8cb5cc78647c"). InnerVolumeSpecName "kube-api-access-6g5v8". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 10:34:28 addons-445250 kubelet[1645]: I0923 10:34:28.251944    1645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40d0085a-ea70-4052-ad07-a26bb7092539-kube-api-access-wxccv" (OuterVolumeSpecName: "kube-api-access-wxccv") pod "40d0085a-ea70-4052-ad07-a26bb7092539" (UID: "40d0085a-ea70-4052-ad07-a26bb7092539"). InnerVolumeSpecName "kube-api-access-wxccv". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 10:34:28 addons-445250 kubelet[1645]: I0923 10:34:28.350604    1645 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-6g5v8\" (UniqueName: \"kubernetes.io/projected/d7382df4-3be8-48d0-9dcb-8cb5cc78647c-kube-api-access-6g5v8\") on node \"addons-445250\" DevicePath \"\""
	Sep 23 10:34:28 addons-445250 kubelet[1645]: I0923 10:34:28.350650    1645 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-wxccv\" (UniqueName: \"kubernetes.io/projected/40d0085a-ea70-4052-ad07-a26bb7092539-kube-api-access-wxccv\") on node \"addons-445250\" DevicePath \"\""
	Sep 23 10:34:28 addons-445250 kubelet[1645]: I0923 10:34:28.510870    1645 scope.go:117] "RemoveContainer" containerID="0e243d9eb20b0bc38de11d573ea4fb7eec20655ae380a20b24432f96edbf5999"
	Sep 23 10:34:28 addons-445250 kubelet[1645]: I0923 10:34:28.525325    1645 scope.go:117] "RemoveContainer" containerID="0e243d9eb20b0bc38de11d573ea4fb7eec20655ae380a20b24432f96edbf5999"
	Sep 23 10:34:28 addons-445250 kubelet[1645]: E0923 10:34:28.525747    1645 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e243d9eb20b0bc38de11d573ea4fb7eec20655ae380a20b24432f96edbf5999\": container with ID starting with 0e243d9eb20b0bc38de11d573ea4fb7eec20655ae380a20b24432f96edbf5999 not found: ID does not exist" containerID="0e243d9eb20b0bc38de11d573ea4fb7eec20655ae380a20b24432f96edbf5999"
	Sep 23 10:34:28 addons-445250 kubelet[1645]: I0923 10:34:28.525795    1645 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e243d9eb20b0bc38de11d573ea4fb7eec20655ae380a20b24432f96edbf5999"} err="failed to get container status \"0e243d9eb20b0bc38de11d573ea4fb7eec20655ae380a20b24432f96edbf5999\": rpc error: code = NotFound desc = could not find container \"0e243d9eb20b0bc38de11d573ea4fb7eec20655ae380a20b24432f96edbf5999\": container with ID starting with 0e243d9eb20b0bc38de11d573ea4fb7eec20655ae380a20b24432f96edbf5999 not found: ID does not exist"
	Sep 23 10:34:28 addons-445250 kubelet[1645]: I0923 10:34:28.525838    1645 scope.go:117] "RemoveContainer" containerID="76287a05db6b97b6b8068094b3b4cd23046ca7b29584261ec8d1c41a41aa1acb"
	Sep 23 10:34:28 addons-445250 kubelet[1645]: I0923 10:34:28.542389    1645 scope.go:117] "RemoveContainer" containerID="76287a05db6b97b6b8068094b3b4cd23046ca7b29584261ec8d1c41a41aa1acb"
	Sep 23 10:34:28 addons-445250 kubelet[1645]: E0923 10:34:28.542749    1645 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"76287a05db6b97b6b8068094b3b4cd23046ca7b29584261ec8d1c41a41aa1acb\": container with ID starting with 76287a05db6b97b6b8068094b3b4cd23046ca7b29584261ec8d1c41a41aa1acb not found: ID does not exist" containerID="76287a05db6b97b6b8068094b3b4cd23046ca7b29584261ec8d1c41a41aa1acb"
	Sep 23 10:34:28 addons-445250 kubelet[1645]: I0923 10:34:28.542786    1645 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"76287a05db6b97b6b8068094b3b4cd23046ca7b29584261ec8d1c41a41aa1acb"} err="failed to get container status \"76287a05db6b97b6b8068094b3b4cd23046ca7b29584261ec8d1c41a41aa1acb\": rpc error: code = NotFound desc = could not find container \"76287a05db6b97b6b8068094b3b4cd23046ca7b29584261ec8d1c41a41aa1acb\": container with ID starting with 76287a05db6b97b6b8068094b3b4cd23046ca7b29584261ec8d1c41a41aa1acb not found: ID does not exist"
	
	
	==> storage-provisioner [66c2617c6cdee7295f19941c86a3a9fbb87fd2b16719e15685c22bcccfbae254] <==
	I0923 10:23:15.441142       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 10:23:15.449522       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 10:23:15.449568       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 10:23:15.456173       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 10:23:15.456300       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f4170979-0bd2-4164-95c1-443418c50fe4", APIVersion:"v1", ResourceVersion:"884", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-445250_f06d8c52-62ab-4c97-b119-1dc16882ef82 became leader
	I0923 10:23:15.456350       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-445250_f06d8c52-62ab-4c97-b119-1dc16882ef82!
	I0923 10:23:15.556572       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-445250_f06d8c52-62ab-4c97-b119-1dc16882ef82!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-445250 -n addons-445250
helpers_test.go:261: (dbg) Run:  kubectl --context addons-445250 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox ingress-nginx-admission-create-8v7x6 ingress-nginx-admission-patch-4wv4b
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Registry]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-445250 describe pod busybox ingress-nginx-admission-create-8v7x6 ingress-nginx-admission-patch-4wv4b
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-445250 describe pod busybox ingress-nginx-admission-create-8v7x6 ingress-nginx-admission-patch-4wv4b: exit status 1 (73.165034ms)

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-445250/192.168.49.2
	Start Time:       Mon, 23 Sep 2024 10:25:14 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xvh9z (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xvh9z:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                    From               Message
	  ----     ------     ----                   ----               -------
	  Normal   Scheduled  9m16s                  default-scheduler  Successfully assigned default/busybox to addons-445250
	  Normal   Pulling    7m45s (x4 over 9m16s)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     7m45s (x4 over 9m15s)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     7m45s (x4 over 9m15s)  kubelet            Error: ErrImagePull
	  Warning  Failed     7m30s (x6 over 9m15s)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    4m5s (x21 over 9m15s)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-8v7x6" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-4wv4b" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-445250 describe pod busybox ingress-nginx-admission-create-8v7x6 ingress-nginx-admission-patch-4wv4b: exit status 1
--- FAIL: TestAddons/parallel/Registry (72.90s)

TestAddons/parallel/Ingress (156.95s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:205: (dbg) Run:  kubectl --context addons-445250 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:230: (dbg) Run:  kubectl --context addons-445250 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:243: (dbg) Run:  kubectl --context addons-445250 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [c574fe38-4c01-4add-ba0d-b74b6b21c297] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [c574fe38-4c01-4add-ba0d-b74b6b21c297] Running
addons_test.go:248: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 14.003892631s
I0923 10:33:53.060254   10562 kapi.go:150] Service nginx in namespace default found.
addons_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p addons-445250 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:260: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-445250 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.674281729s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:276: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:284: (dbg) Run:  kubectl --context addons-445250 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:289: (dbg) Run:  out/minikube-linux-amd64 -p addons-445250 ip
addons_test.go:295: (dbg) Run:  nslookup hello-john.test 192.168.49.2
addons_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p addons-445250 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:304: (dbg) Done: out/minikube-linux-amd64 -p addons-445250 addons disable ingress-dns --alsologtostderr -v=1: (1.462771044s)
addons_test.go:309: (dbg) Run:  out/minikube-linux-amd64 -p addons-445250 addons disable ingress --alsologtostderr -v=1
addons_test.go:309: (dbg) Done: out/minikube-linux-amd64 -p addons-445250 addons disable ingress --alsologtostderr -v=1: (7.610640101s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-445250
helpers_test.go:235: (dbg) docker inspect addons-445250:

-- stdout --
	[
	    {
	        "Id": "13e368cd79e9d454e98de5a9cff5f0313d5870383f0fb5ba461690974f64d8de",
	        "Created": "2024-09-23T10:22:07.858444399Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 12702,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-23T10:22:07.992183864Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:d94335c0cd164ddebb3c5158e317bcf6d2e08dc08f448d25251f425acb842829",
	        "ResolvConfPath": "/var/lib/docker/containers/13e368cd79e9d454e98de5a9cff5f0313d5870383f0fb5ba461690974f64d8de/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/13e368cd79e9d454e98de5a9cff5f0313d5870383f0fb5ba461690974f64d8de/hostname",
	        "HostsPath": "/var/lib/docker/containers/13e368cd79e9d454e98de5a9cff5f0313d5870383f0fb5ba461690974f64d8de/hosts",
	        "LogPath": "/var/lib/docker/containers/13e368cd79e9d454e98de5a9cff5f0313d5870383f0fb5ba461690974f64d8de/13e368cd79e9d454e98de5a9cff5f0313d5870383f0fb5ba461690974f64d8de-json.log",
	        "Name": "/addons-445250",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-445250:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-445250",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1f8a71aa8244c18f7fa961d18bc719b2df405f1b269f136ff90fad7264f8c0b9-init/diff:/var/lib/docker/overlay2/7d643569ae4970466837c9a65113e736da4066b6ecef95c8dfd4e28343439fd4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f8a71aa8244c18f7fa961d18bc719b2df405f1b269f136ff90fad7264f8c0b9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f8a71aa8244c18f7fa961d18bc719b2df405f1b269f136ff90fad7264f8c0b9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f8a71aa8244c18f7fa961d18bc719b2df405f1b269f136ff90fad7264f8c0b9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-445250",
	                "Source": "/var/lib/docker/volumes/addons-445250/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-445250",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-445250",
	                "name.minikube.sigs.k8s.io": "addons-445250",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "11702f683be50ee88e7771ed6cf42c56a8b968ee9233079204792fc15e16ca3a",
	            "SandboxKey": "/var/run/docker/netns/11702f683be5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-445250": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "6e9e6c600c8a794f7091380417d6269c6bcfab6c9ff820d67e47faecc18d66e9",
	                    "EndpointID": "e2a135f221a1a3480c5eff902d6dc55c09d0804810f708c60a366ec74feb8c19",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-445250",
	                        "13e368cd79e9"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-445250 -n addons-445250
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-445250 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-445250 logs -n 25: (1.183573073s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-764506                                                                     | download-only-764506   | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| delete  | -p download-only-662224                                                                     | download-only-662224   | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| start   | --download-only -p                                                                          | download-docker-581243 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | download-docker-581243                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-581243                                                                   | download-docker-581243 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-083835   | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | binary-mirror-083835                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:40991                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-083835                                                                     | binary-mirror-083835   | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| addons  | enable dashboard -p                                                                         | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | addons-445250                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | addons-445250                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-445250 --wait=true                                                                | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:25 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-445250 addons disable                                                                | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | addons-445250                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-445250 ssh curl -s                                                                   | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-445250 addons                                                                        | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-445250 addons                                                                        | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-445250 ssh cat                                                                       | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	|         | /opt/local-path-provisioner/pvc-f2f3f271-6db1-4176-931b-e93dd714c1c9_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-445250 addons disable                                                                | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:35 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-445250 ip                                                                            | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	| addons  | addons-445250 addons disable                                                                | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	|         | -p addons-445250                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	|         | addons-445250                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	|         | -p addons-445250                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-445250 addons disable                                                                | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-445250 ip                                                                            | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:36 UTC | 23 Sep 24 10:36 UTC |
	| addons  | addons-445250 addons disable                                                                | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:36 UTC | 23 Sep 24 10:36 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-445250 addons disable                                                                | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:36 UTC | 23 Sep 24 10:36 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:21:46
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:21:46.722935   11967 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:21:46.723042   11967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:21:46.723048   11967 out.go:358] Setting ErrFile to fd 2...
	I0923 10:21:46.723052   11967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:21:46.723211   11967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3772/.minikube/bin
	I0923 10:21:46.723833   11967 out.go:352] Setting JSON to false
	I0923 10:21:46.724726   11967 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":251,"bootTime":1727086656,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:21:46.724818   11967 start.go:139] virtualization: kvm guest
	I0923 10:21:46.726917   11967 out.go:177] * [addons-445250] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 10:21:46.728496   11967 notify.go:220] Checking for updates...
	I0923 10:21:46.728529   11967 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:21:46.730127   11967 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:21:46.731529   11967 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3772/kubeconfig
	I0923 10:21:46.733032   11967 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3772/.minikube
	I0923 10:21:46.734520   11967 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 10:21:46.735940   11967 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:21:46.737437   11967 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:21:46.757864   11967 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 10:21:46.757943   11967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:21:46.804617   11967 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-23 10:21:46.795429084 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 10:21:46.804761   11967 docker.go:318] overlay module found
	I0923 10:21:46.807023   11967 out.go:177] * Using the docker driver based on user configuration
	I0923 10:21:46.808457   11967 start.go:297] selected driver: docker
	I0923 10:21:46.808470   11967 start.go:901] validating driver "docker" against <nil>
	I0923 10:21:46.808480   11967 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:21:46.809252   11967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:21:46.853138   11967 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-23 10:21:46.844831844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 10:21:46.853280   11967 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:21:46.853569   11967 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:21:46.855475   11967 out.go:177] * Using Docker driver with root privileges
	I0923 10:21:46.856837   11967 cni.go:84] Creating CNI manager for ""
	I0923 10:21:46.856896   11967 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0923 10:21:46.856908   11967 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 10:21:46.856965   11967 start.go:340] cluster config:
	{Name:addons-445250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-445250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:21:46.858565   11967 out.go:177] * Starting "addons-445250" primary control-plane node in "addons-445250" cluster
	I0923 10:21:46.859951   11967 cache.go:121] Beginning downloading kic base image for docker with crio
	I0923 10:21:46.861523   11967 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 10:21:46.862889   11967 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:21:46.862932   11967 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-3772/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 10:21:46.862943   11967 cache.go:56] Caching tarball of preloaded images
	I0923 10:21:46.862994   11967 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 10:21:46.863034   11967 preload.go:172] Found /home/jenkins/minikube-integration/19689-3772/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 10:21:46.863044   11967 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 10:21:46.863345   11967 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/config.json ...
	I0923 10:21:46.863370   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/config.json: {Name:mk54c5258400406bc02a0be01645830e04ed3533 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:46.878981   11967 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 10:21:46.879106   11967 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 10:21:46.879123   11967 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0923 10:21:46.879127   11967 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0923 10:21:46.879134   11967 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0923 10:21:46.879141   11967 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0923 10:21:59.079658   11967 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0923 10:21:59.079699   11967 cache.go:194] Successfully downloaded all kic artifacts
	I0923 10:21:59.079749   11967 start.go:360] acquireMachinesLock for addons-445250: {Name:mk58626d6fa4f17f6f629476491054fee819afac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 10:21:59.079854   11967 start.go:364] duration metric: took 81.967µs to acquireMachinesLock for "addons-445250"
	I0923 10:21:59.079884   11967 start.go:93] Provisioning new machine with config: &{Name:addons-445250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-445250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:21:59.079961   11967 start.go:125] createHost starting for "" (driver="docker")
	I0923 10:21:59.082680   11967 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0923 10:21:59.082908   11967 start.go:159] libmachine.API.Create for "addons-445250" (driver="docker")
	I0923 10:21:59.082939   11967 client.go:168] LocalClient.Create starting
	I0923 10:21:59.083053   11967 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19689-3772/.minikube/certs/ca.pem
	I0923 10:21:59.283728   11967 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19689-3772/.minikube/certs/cert.pem
	I0923 10:21:59.338041   11967 cli_runner.go:164] Run: docker network inspect addons-445250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0923 10:21:59.353789   11967 cli_runner.go:211] docker network inspect addons-445250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0923 10:21:59.353863   11967 network_create.go:284] running [docker network inspect addons-445250] to gather additional debugging logs...
	I0923 10:21:59.353885   11967 cli_runner.go:164] Run: docker network inspect addons-445250
	W0923 10:21:59.368954   11967 cli_runner.go:211] docker network inspect addons-445250 returned with exit code 1
	I0923 10:21:59.368983   11967 network_create.go:287] error running [docker network inspect addons-445250]: docker network inspect addons-445250: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-445250 not found
	I0923 10:21:59.368994   11967 network_create.go:289] output of [docker network inspect addons-445250]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-445250 not found
	
	** /stderr **
	I0923 10:21:59.369064   11967 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 10:21:59.384645   11967 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b467a0}
	I0923 10:21:59.384701   11967 network_create.go:124] attempt to create docker network addons-445250 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0923 10:21:59.384762   11967 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-445250 addons-445250
	I0923 10:21:59.445035   11967 network_create.go:108] docker network addons-445250 192.168.49.0/24 created
	I0923 10:21:59.445065   11967 kic.go:121] calculated static IP "192.168.49.2" for the "addons-445250" container
	I0923 10:21:59.445131   11967 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0923 10:21:59.460629   11967 cli_runner.go:164] Run: docker volume create addons-445250 --label name.minikube.sigs.k8s.io=addons-445250 --label created_by.minikube.sigs.k8s.io=true
	I0923 10:21:59.476907   11967 oci.go:103] Successfully created a docker volume addons-445250
	I0923 10:21:59.476979   11967 cli_runner.go:164] Run: docker run --rm --name addons-445250-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-445250 --entrypoint /usr/bin/test -v addons-445250:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0923 10:22:03.434642   11967 cli_runner.go:217] Completed: docker run --rm --name addons-445250-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-445250 --entrypoint /usr/bin/test -v addons-445250:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (3.957618145s)
	I0923 10:22:03.434674   11967 oci.go:107] Successfully prepared a docker volume addons-445250
	I0923 10:22:03.434699   11967 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:22:03.434718   11967 kic.go:194] Starting extracting preloaded images to volume ...
	I0923 10:22:03.434769   11967 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19689-3772/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-445250:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0923 10:22:07.800698   11967 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19689-3772/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-445250:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (4.365884505s)
	I0923 10:22:07.800727   11967 kic.go:203] duration metric: took 4.366005266s to extract preloaded images to volume ...
	W0923 10:22:07.800860   11967 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0923 10:22:07.800985   11967 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0923 10:22:07.843740   11967 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-445250 --name addons-445250 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-445250 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-445250 --network addons-445250 --ip 192.168.49.2 --volume addons-445250:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
	I0923 10:22:08.145428   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Running}}
	I0923 10:22:08.163069   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:08.180280   11967 cli_runner.go:164] Run: docker exec addons-445250 stat /var/lib/dpkg/alternatives/iptables
	I0923 10:22:08.223991   11967 oci.go:144] the created container "addons-445250" has a running status.
	I0923 10:22:08.224039   11967 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa...
	I0923 10:22:08.349744   11967 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0923 10:22:08.370308   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:08.394245   11967 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0923 10:22:08.394268   11967 kic_runner.go:114] Args: [docker exec --privileged addons-445250 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0923 10:22:08.436001   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:08.455362   11967 machine.go:93] provisionDockerMachine start ...
	I0923 10:22:08.455457   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:08.480578   11967 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:08.480844   11967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 10:22:08.480858   11967 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 10:22:08.481650   11967 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44746->127.0.0.1:32768: read: connection reset by peer
	I0923 10:22:11.613107   11967 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-445250
	
	I0923 10:22:11.613148   11967 ubuntu.go:169] provisioning hostname "addons-445250"
	I0923 10:22:11.613220   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:11.632203   11967 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:11.632375   11967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 10:22:11.632389   11967 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-445250 && echo "addons-445250" | sudo tee /etc/hostname
	I0923 10:22:11.772148   11967 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-445250
	
	I0923 10:22:11.772239   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:11.793347   11967 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:11.793545   11967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 10:22:11.793571   11967 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-445250' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-445250/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-445250' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 10:22:11.921432   11967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0923 10:22:11.921466   11967 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19689-3772/.minikube CaCertPath:/home/jenkins/minikube-integration/19689-3772/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19689-3772/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19689-3772/.minikube}
	I0923 10:22:11.921533   11967 ubuntu.go:177] setting up certificates
	I0923 10:22:11.921552   11967 provision.go:84] configureAuth start
	I0923 10:22:11.921640   11967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-445250
	I0923 10:22:11.937581   11967 provision.go:143] copyHostCerts
	I0923 10:22:11.937653   11967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3772/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19689-3772/.minikube/key.pem (1679 bytes)
	I0923 10:22:11.937757   11967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3772/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19689-3772/.minikube/ca.pem (1082 bytes)
	I0923 10:22:11.937816   11967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3772/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19689-3772/.minikube/cert.pem (1123 bytes)
	I0923 10:22:11.937865   11967 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19689-3772/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19689-3772/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19689-3772/.minikube/certs/ca-key.pem org=jenkins.addons-445250 san=[127.0.0.1 192.168.49.2 addons-445250 localhost minikube]
	I0923 10:22:12.190566   11967 provision.go:177] copyRemoteCerts
	I0923 10:22:12.190629   11967 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 10:22:12.190662   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:12.207913   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:12.301604   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 10:22:12.323506   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 10:22:12.345626   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 10:22:12.366986   11967 provision.go:87] duration metric: took 445.417004ms to configureAuth
	I0923 10:22:12.367016   11967 ubuntu.go:193] setting minikube options for container-runtime
	I0923 10:22:12.367177   11967 config.go:182] Loaded profile config "addons-445250": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:22:12.367273   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:12.384149   11967 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:12.384351   11967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 10:22:12.384365   11967 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 10:22:12.601161   11967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0923 10:22:12.601191   11967 machine.go:96] duration metric: took 4.145798692s to provisionDockerMachine
	I0923 10:22:12.601205   11967 client.go:171] duration metric: took 13.518254951s to LocalClient.Create
	I0923 10:22:12.601232   11967 start.go:167] duration metric: took 13.518321061s to libmachine.API.Create "addons-445250"
	I0923 10:22:12.601243   11967 start.go:293] postStartSetup for "addons-445250" (driver="docker")
	I0923 10:22:12.601256   11967 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 10:22:12.601330   11967 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 10:22:12.601386   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:12.617703   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:12.710189   11967 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 10:22:12.713341   11967 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 10:22:12.713372   11967 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 10:22:12.713380   11967 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 10:22:12.713387   11967 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 10:22:12.713396   11967 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3772/.minikube/addons for local assets ...
	I0923 10:22:12.713453   11967 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3772/.minikube/files for local assets ...
	I0923 10:22:12.713475   11967 start.go:296] duration metric: took 112.225945ms for postStartSetup
	I0923 10:22:12.713792   11967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-445250
	I0923 10:22:12.730492   11967 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/config.json ...
	I0923 10:22:12.730768   11967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:22:12.730831   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:12.747370   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:12.837980   11967 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I0923 10:22:12.841809   11967 start.go:128] duration metric: took 13.761835585s to createHost
	I0923 10:22:12.841831   11967 start.go:83] releasing machines lock for "addons-445250", held for 13.76196327s
	I0923 10:22:12.841880   11967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-445250
	I0923 10:22:12.857765   11967 ssh_runner.go:195] Run: cat /version.json
	I0923 10:22:12.857812   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:12.857826   11967 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 10:22:12.857890   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:12.875001   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:12.875855   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:13.035237   11967 ssh_runner.go:195] Run: systemctl --version
	I0923 10:22:13.039237   11967 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 10:22:13.175392   11967 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 10:22:13.179320   11967 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 10:22:13.195856   11967 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0923 10:22:13.195931   11967 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 10:22:13.221316   11967 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0923 10:22:13.221364   11967 start.go:495] detecting cgroup driver to use...
	I0923 10:22:13.221399   11967 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 10:22:13.221447   11967 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 10:22:13.235209   11967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 10:22:13.245258   11967 docker.go:217] disabling cri-docker service (if available) ...
	I0923 10:22:13.245304   11967 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 10:22:13.257110   11967 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 10:22:13.270190   11967 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 10:22:13.345987   11967 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 10:22:13.431095   11967 docker.go:233] disabling docker service ...
	I0923 10:22:13.431158   11967 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 10:22:13.448504   11967 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 10:22:13.459326   11967 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 10:22:13.538609   11967 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 10:22:13.627128   11967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 10:22:13.637297   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:22:13.651328   11967 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 10:22:13.651409   11967 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:13.660149   11967 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 10:22:13.660207   11967 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:13.668833   11967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:13.677566   11967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:13.686751   11967 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 10:22:13.695283   11967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:13.704095   11967 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:13.718346   11967 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:13.727226   11967 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 10:22:13.734826   11967 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 10:22:13.734883   11967 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 10:22:13.747287   11967 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 10:22:13.755093   11967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:22:13.829252   11967 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 10:22:14.158226   11967 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 10:22:14.158294   11967 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 10:22:14.161542   11967 start.go:563] Will wait 60s for crictl version
	I0923 10:22:14.161588   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:22:14.164545   11967 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 10:22:14.194967   11967 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0923 10:22:14.195073   11967 ssh_runner.go:195] Run: crio --version
	I0923 10:22:14.228259   11967 ssh_runner.go:195] Run: crio --version
	I0923 10:22:14.262832   11967 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0923 10:22:14.264297   11967 cli_runner.go:164] Run: docker network inspect addons-445250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 10:22:14.279971   11967 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0923 10:22:14.283271   11967 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:22:14.293146   11967 kubeadm.go:883] updating cluster {Name:addons-445250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-445250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNa
mes:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmw
arePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 10:22:14.293287   11967 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:22:14.293343   11967 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 10:22:14.352262   11967 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 10:22:14.352284   11967 crio.go:433] Images already preloaded, skipping extraction
	I0923 10:22:14.352323   11967 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 10:22:14.382541   11967 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 10:22:14.382561   11967 cache_images.go:84] Images are preloaded, skipping loading
	I0923 10:22:14.382568   11967 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0923 10:22:14.382655   11967 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-445250 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-445250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 10:22:14.382713   11967 ssh_runner.go:195] Run: crio config
	I0923 10:22:14.424280   11967 cni.go:84] Creating CNI manager for ""
	I0923 10:22:14.424300   11967 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0923 10:22:14.424309   11967 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 10:22:14.424330   11967 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-445250 NodeName:addons-445250 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kuberne
tes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 10:22:14.424465   11967 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-445250"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 10:22:14.424518   11967 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 10:22:14.432810   11967 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 10:22:14.432882   11967 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 10:22:14.440979   11967 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0923 10:22:14.456846   11967 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 10:22:14.473092   11967 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0923 10:22:14.489063   11967 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0923 10:22:14.492280   11967 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:22:14.502541   11967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:22:14.581826   11967 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:22:14.594096   11967 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250 for IP: 192.168.49.2
	I0923 10:22:14.594120   11967 certs.go:194] generating shared ca certs ...
	I0923 10:22:14.594140   11967 certs.go:226] acquiring lock for ca certs: {Name:mkbb719d992584afad4bc806b595dfbc8bf85283 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:14.594259   11967 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-3772/.minikube/ca.key
	I0923 10:22:14.681658   11967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3772/.minikube/ca.crt ...
	I0923 10:22:14.681683   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/ca.crt: {Name:mk1f9f53ba20e5a2662fcdac9037bc6a4a8fd1b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:14.681837   11967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3772/.minikube/ca.key ...
	I0923 10:22:14.681847   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/ca.key: {Name:mk52ffe2b2a53346768d26bc1f6d2740c4fc9ca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:14.681914   11967 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-3772/.minikube/proxy-client-ca.key
	I0923 10:22:14.764606   11967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3772/.minikube/proxy-client-ca.crt ...
	I0923 10:22:14.764633   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/proxy-client-ca.crt: {Name:mk8f4a9df3471bb1b7cc77d68850cb5575be1691 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:14.764782   11967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3772/.minikube/proxy-client-ca.key ...
	I0923 10:22:14.764793   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/proxy-client-ca.key: {Name:mk637c0032a7e0b43519628027243d2c0d2d6b64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:14.764855   11967 certs.go:256] generating profile certs ...
	I0923 10:22:14.764906   11967 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.key
	I0923 10:22:14.764920   11967 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt with IP's: []
	I0923 10:22:15.005422   11967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt ...
	I0923 10:22:15.005450   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: {Name:mk4bd69aa7022da3f588d449215ad314ecdb2eb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:15.005608   11967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.key ...
	I0923 10:22:15.005620   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.key: {Name:mkae46f7c7acf2efdeeb48926276ca9bf1fec02a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:15.005682   11967 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.key.3041dcfa
	I0923 10:22:15.005699   11967 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.crt.3041dcfa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0923 10:22:15.404464   11967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.crt.3041dcfa ...
	I0923 10:22:15.404496   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.crt.3041dcfa: {Name:mk8def3abfe8729e739e9892b8e2dfdfaa975e77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:15.404648   11967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.key.3041dcfa ...
	I0923 10:22:15.404661   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.key.3041dcfa: {Name:mkc1cfd8e1a6b6ba70edb50de4cc7a2de96fef4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:15.404730   11967 certs.go:381] copying /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.crt.3041dcfa -> /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.crt
	I0923 10:22:15.404821   11967 certs.go:385] copying /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.key.3041dcfa -> /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.key
	I0923 10:22:15.404875   11967 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/proxy-client.key
	I0923 10:22:15.404901   11967 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/proxy-client.crt with IP's: []
	I0923 10:22:15.857985   11967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/proxy-client.crt ...
	I0923 10:22:15.858015   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/proxy-client.crt: {Name:mk3ebea646b11f719e3aafe05a2859ab48c62804 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:15.858201   11967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/proxy-client.key ...
	I0923 10:22:15.858218   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/proxy-client.key: {Name:mk16575e201f9fd127e621495ba0c5bc4e64a79f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:15.858432   11967 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3772/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 10:22:15.858477   11967 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3772/.minikube/certs/ca.pem (1082 bytes)
	I0923 10:22:15.858514   11967 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3772/.minikube/certs/cert.pem (1123 bytes)
	I0923 10:22:15.858544   11967 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3772/.minikube/certs/key.pem (1679 bytes)
	I0923 10:22:15.859128   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 10:22:15.880908   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 10:22:15.902039   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 10:22:15.923187   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 10:22:15.944338   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 10:22:15.965387   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 10:22:15.986458   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 10:22:16.007433   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 10:22:16.028442   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 10:22:16.050349   11967 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 10:22:16.067334   11967 ssh_runner.go:195] Run: openssl version
	I0923 10:22:16.072904   11967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 10:22:16.081554   11967 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:22:16.084699   11967 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:22:16.084740   11967 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:22:16.091121   11967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 10:22:16.099776   11967 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 10:22:16.102849   11967 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 10:22:16.102894   11967 kubeadm.go:392] StartCluster: {Name:addons-445250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-445250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:22:16.102966   11967 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 10:22:16.103005   11967 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 10:22:16.135153   11967 cri.go:89] found id: ""
	I0923 10:22:16.135208   11967 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 10:22:16.143317   11967 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 10:22:16.151131   11967 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0923 10:22:16.151185   11967 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 10:22:16.158804   11967 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 10:22:16.158823   11967 kubeadm.go:157] found existing configuration files:
	
	I0923 10:22:16.158860   11967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 10:22:16.166353   11967 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 10:22:16.166422   11967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 10:22:16.174207   11967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 10:22:16.181623   11967 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 10:22:16.181684   11967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 10:22:16.189018   11967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 10:22:16.196505   11967 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 10:22:16.196565   11967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 10:22:16.204028   11967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 10:22:16.211652   11967 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 10:22:16.211714   11967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 10:22:16.218868   11967 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0923 10:22:16.253475   11967 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 10:22:16.254017   11967 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 10:22:16.269754   11967 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0923 10:22:16.269837   11967 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I0923 10:22:16.269871   11967 kubeadm.go:310] OS: Linux
	I0923 10:22:16.269959   11967 kubeadm.go:310] CGROUPS_CPU: enabled
	I0923 10:22:16.270050   11967 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0923 10:22:16.270128   11967 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0923 10:22:16.270202   11967 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0923 10:22:16.270274   11967 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0923 10:22:16.270360   11967 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0923 10:22:16.270417   11967 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0923 10:22:16.270469   11967 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0923 10:22:16.270521   11967 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0923 10:22:16.318273   11967 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 10:22:16.318402   11967 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 10:22:16.318562   11967 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 10:22:16.324445   11967 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 10:22:16.327392   11967 out.go:235]   - Generating certificates and keys ...
	I0923 10:22:16.327503   11967 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 10:22:16.327598   11967 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 10:22:16.461803   11967 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 10:22:16.741266   11967 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 10:22:16.849130   11967 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 10:22:17.176671   11967 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 10:22:17.429269   11967 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 10:22:17.429471   11967 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-445250 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 10:22:17.596676   11967 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 10:22:17.596789   11967 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-445250 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 10:22:17.788256   11967 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 10:22:17.876354   11967 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 10:22:18.471196   11967 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 10:22:18.471297   11967 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 10:22:18.730115   11967 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 10:22:18.932151   11967 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 10:22:19.024826   11967 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 10:22:19.144008   11967 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 10:22:19.259815   11967 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 10:22:19.260334   11967 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 10:22:19.262678   11967 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 10:22:19.264869   11967 out.go:235]   - Booting up control plane ...
	I0923 10:22:19.265001   11967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 10:22:19.265096   11967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 10:22:19.265162   11967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 10:22:19.273358   11967 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 10:22:19.278617   11967 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 10:22:19.278696   11967 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 10:22:19.355589   11967 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 10:22:19.355691   11967 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 10:22:19.857077   11967 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.535465ms
	I0923 10:22:19.857205   11967 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 10:22:24.859100   11967 kubeadm.go:310] [api-check] The API server is healthy after 5.002044714s
	I0923 10:22:24.871015   11967 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 10:22:24.881606   11967 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 10:22:24.899928   11967 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 10:22:24.900246   11967 kubeadm.go:310] [mark-control-plane] Marking the node addons-445250 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 10:22:24.910024   11967 kubeadm.go:310] [bootstrap-token] Using token: tzcr7c.qy08ihjpsu8woy77
	I0923 10:22:24.911692   11967 out.go:235]   - Configuring RBAC rules ...
	I0923 10:22:24.911836   11967 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 10:22:24.914938   11967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 10:22:24.920963   11967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 10:22:24.923728   11967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 10:22:24.926249   11967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 10:22:24.929913   11967 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 10:22:25.266487   11967 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 10:22:25.686706   11967 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 10:22:26.267074   11967 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 10:22:26.268161   11967 kubeadm.go:310] 
	I0923 10:22:26.268232   11967 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 10:22:26.268246   11967 kubeadm.go:310] 
	I0923 10:22:26.268333   11967 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 10:22:26.268349   11967 kubeadm.go:310] 
	I0923 10:22:26.268371   11967 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 10:22:26.268443   11967 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 10:22:26.268498   11967 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 10:22:26.268503   11967 kubeadm.go:310] 
	I0923 10:22:26.268548   11967 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 10:22:26.268555   11967 kubeadm.go:310] 
	I0923 10:22:26.268595   11967 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 10:22:26.268602   11967 kubeadm.go:310] 
	I0923 10:22:26.268680   11967 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 10:22:26.268775   11967 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 10:22:26.268850   11967 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 10:22:26.268858   11967 kubeadm.go:310] 
	I0923 10:22:26.268962   11967 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 10:22:26.269039   11967 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 10:22:26.269050   11967 kubeadm.go:310] 
	I0923 10:22:26.269125   11967 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tzcr7c.qy08ihjpsu8woy77 \
	I0923 10:22:26.269229   11967 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:122e8e80e5d252d0370d2ad3bf07440a5ae64df4281d54e7d14ffb6b148b696e \
	I0923 10:22:26.269251   11967 kubeadm.go:310] 	--control-plane 
	I0923 10:22:26.269256   11967 kubeadm.go:310] 
	I0923 10:22:26.269371   11967 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 10:22:26.269381   11967 kubeadm.go:310] 
	I0923 10:22:26.269476   11967 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tzcr7c.qy08ihjpsu8woy77 \
	I0923 10:22:26.269658   11967 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:122e8e80e5d252d0370d2ad3bf07440a5ae64df4281d54e7d14ffb6b148b696e 
	I0923 10:22:26.271764   11967 kubeadm.go:310] W0923 10:22:16.250858    1303 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:22:26.272087   11967 kubeadm.go:310] W0923 10:22:16.251517    1303 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:22:26.272386   11967 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I0923 10:22:26.272539   11967 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 10:22:26.272573   11967 cni.go:84] Creating CNI manager for ""
	I0923 10:22:26.272586   11967 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0923 10:22:26.274792   11967 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0923 10:22:26.276289   11967 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0923 10:22:26.279952   11967 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0923 10:22:26.279967   11967 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0923 10:22:26.296902   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0923 10:22:26.488183   11967 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 10:22:26.488297   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:26.488297   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-445250 minikube.k8s.io/updated_at=2024_09_23T10_22_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986 minikube.k8s.io/name=addons-445250 minikube.k8s.io/primary=true
	I0923 10:22:26.495506   11967 ops.go:34] apiserver oom_adj: -16
	I0923 10:22:26.569029   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:27.069137   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:27.570004   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:28.069287   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:28.569763   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:29.069617   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:29.569788   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:30.069539   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:30.569838   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:30.639983   11967 kubeadm.go:1113] duration metric: took 4.1517431s to wait for elevateKubeSystemPrivileges
	I0923 10:22:30.640014   11967 kubeadm.go:394] duration metric: took 14.537124377s to StartCluster
	I0923 10:22:30.640032   11967 settings.go:142] acquiring lock: {Name:mk872f1d275188f797c9a12c8098849cd4e5cab3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:30.640127   11967 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19689-3772/kubeconfig
	I0923 10:22:30.640473   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/kubeconfig: {Name:mk157cbe356b4d3a0ed9cd6c04752524343ac891 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:30.640639   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 10:22:30.640656   11967 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:22:30.640716   11967 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 10:22:30.640836   11967 addons.go:69] Setting yakd=true in profile "addons-445250"
	I0923 10:22:30.640848   11967 config.go:182] Loaded profile config "addons-445250": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:22:30.640862   11967 addons.go:234] Setting addon yakd=true in "addons-445250"
	I0923 10:22:30.640856   11967 addons.go:69] Setting ingress-dns=true in profile "addons-445250"
	I0923 10:22:30.640883   11967 addons.go:234] Setting addon ingress-dns=true in "addons-445250"
	I0923 10:22:30.640892   11967 addons.go:69] Setting gcp-auth=true in profile "addons-445250"
	I0923 10:22:30.640895   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.640889   11967 addons.go:69] Setting default-storageclass=true in profile "addons-445250"
	I0923 10:22:30.640909   11967 mustload.go:65] Loading cluster: addons-445250
	I0923 10:22:30.640900   11967 addons.go:69] Setting cloud-spanner=true in profile "addons-445250"
	I0923 10:22:30.640917   11967 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-445250"
	I0923 10:22:30.640906   11967 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-445250"
	I0923 10:22:30.640934   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.640949   11967 addons.go:69] Setting registry=true in profile "addons-445250"
	I0923 10:22:30.640966   11967 addons.go:234] Setting addon registry=true in "addons-445250"
	I0923 10:22:30.640971   11967 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-445250"
	I0923 10:22:30.640995   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.640999   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.641053   11967 config.go:182] Loaded profile config "addons-445250": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:22:30.641257   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.641280   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.641366   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.641379   11967 addons.go:69] Setting inspektor-gadget=true in profile "addons-445250"
	I0923 10:22:30.641392   11967 addons.go:234] Setting addon inspektor-gadget=true in "addons-445250"
	I0923 10:22:30.641415   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.641426   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.641435   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.641870   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.642158   11967 addons.go:69] Setting ingress=true in profile "addons-445250"
	I0923 10:22:30.642181   11967 addons.go:234] Setting addon ingress=true in "addons-445250"
	I0923 10:22:30.642196   11967 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-445250"
	I0923 10:22:30.642213   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.642215   11967 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-445250"
	I0923 10:22:30.642394   11967 addons.go:69] Setting volcano=true in profile "addons-445250"
	I0923 10:22:30.642518   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.642544   11967 addons.go:234] Setting addon volcano=true in "addons-445250"
	I0923 10:22:30.642576   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.642679   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.642705   11967 addons.go:69] Setting volumesnapshots=true in profile "addons-445250"
	I0923 10:22:30.642720   11967 addons.go:234] Setting addon volumesnapshots=true in "addons-445250"
	I0923 10:22:30.642741   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.642878   11967 addons.go:69] Setting metrics-server=true in profile "addons-445250"
	I0923 10:22:30.642900   11967 addons.go:234] Setting addon metrics-server=true in "addons-445250"
	I0923 10:22:30.642925   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.641369   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.640934   11967 addons.go:234] Setting addon cloud-spanner=true in "addons-445250"
	I0923 10:22:30.642998   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.643036   11967 addons.go:69] Setting storage-provisioner=true in profile "addons-445250"
	I0923 10:22:30.643061   11967 addons.go:234] Setting addon storage-provisioner=true in "addons-445250"
	I0923 10:22:30.643085   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.643519   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.648041   11967 out.go:177] * Verifying Kubernetes components...
	I0923 10:22:30.648388   11967 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-445250"
	I0923 10:22:30.648409   11967 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-445250"
	I0923 10:22:30.648446   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.648949   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.650296   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.650451   11967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:22:30.665893   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.666046   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.666054   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.667975   11967 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 10:22:30.669562   11967 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 10:22:30.669584   11967 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 10:22:30.669644   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.685327   11967 addons.go:234] Setting addon default-storageclass=true in "addons-445250"
	I0923 10:22:30.685375   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.685862   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.686064   11967 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 10:22:30.687257   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.687775   11967 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 10:22:30.687828   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 10:22:30.687898   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.698483   11967 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 10:22:30.700153   11967 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 10:22:30.702006   11967 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 10:22:30.702025   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 10:22:30.702098   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.711515   11967 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 10:22:30.713056   11967 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 10:22:30.713078   11967 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 10:22:30.713145   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.714581   11967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 10:22:30.717305   11967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 10:22:30.718944   11967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 10:22:30.720619   11967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 10:22:30.722483   11967 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 10:22:30.722584   11967 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 10:22:30.724093   11967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 10:22:30.724118   11967 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 10:22:30.724177   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.724581   11967 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 10:22:30.726222   11967 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 10:22:30.726536   11967 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:22:30.726554   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 10:22:30.726622   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.730002   11967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 10:22:30.732162   11967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 10:22:30.733954   11967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 10:22:30.733974   11967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 10:22:30.734033   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.741467   11967 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-445250"
	I0923 10:22:30.741546   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.742079   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.744624   11967 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:22:30.747096   11967 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 10:22:30.749664   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.757242   11967 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:22:30.758538   11967 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 10:22:30.759791   11967 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 10:22:30.759812   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 10:22:30.759870   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.760711   11967 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 10:22:30.760738   11967 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 10:22:30.760797   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.757266   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.770117   11967 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 10:22:30.770190   11967 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0923 10:22:30.772541   11967 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0923 10:22:30.774529   11967 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 10:22:30.774556   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 10:22:30.774621   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.775602   11967 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:22:30.775624   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 10:22:30.775674   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.787036   11967 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 10:22:30.787436   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.789020   11967 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 10:22:30.789037   11967 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 10:22:30.789092   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.790579   11967 out.go:177]   - Using image docker.io/busybox:stable
	I0923 10:22:30.792232   11967 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:22:30.792253   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 10:22:30.792310   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.799243   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.802426   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.804649   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.807350   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.811528   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.815735   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.815938   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.817779   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.818866   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.821325   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	W0923 10:22:30.833869   11967 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 10:22:30.833911   11967 retry.go:31] will retry after 251.502566ms: ssh: handshake failed: EOF
	I0923 10:22:30.930840   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 10:22:31.038430   11967 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:22:31.130020   11967 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 10:22:31.130106   11967 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 10:22:31.148662   11967 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 10:22:31.148713   11967 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 10:22:31.247685   11967 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 10:22:31.247721   11967 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 10:22:31.329027   11967 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 10:22:31.329056   11967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 10:22:31.329202   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:22:31.329335   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:22:31.329470   11967 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:22:31.329484   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 10:22:31.339924   11967 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 10:22:31.339949   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 10:22:31.342429   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:22:31.345422   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 10:22:31.346167   11967 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 10:22:31.346186   11967 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 10:22:31.437032   11967 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 10:22:31.437063   11967 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 10:22:31.440413   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 10:22:31.445193   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 10:22:31.527011   11967 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 10:22:31.527097   11967 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 10:22:31.546462   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:22:31.627051   11967 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 10:22:31.627133   11967 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 10:22:31.627825   11967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 10:22:31.627867   11967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 10:22:31.630751   11967 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 10:22:31.630811   11967 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 10:22:31.635963   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:22:31.636203   11967 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 10:22:31.636238   11967 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 10:22:31.639336   11967 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 10:22:31.639357   11967 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 10:22:31.826246   11967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 10:22:31.826334   11967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 10:22:31.826554   11967 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:22:31.826604   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 10:22:31.827508   11967 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:22:31.827551   11967 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 10:22:31.930152   11967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 10:22:31.930233   11967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 10:22:31.946738   11967 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 10:22:31.946849   11967 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 10:22:32.031166   11967 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 10:22:32.031251   11967 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 10:22:32.038777   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:22:32.046798   11967 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.115916979s)
	I0923 10:22:32.046977   11967 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0923 10:22:32.046913   11967 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.008451247s)
	I0923 10:22:32.048977   11967 node_ready.go:35] waiting up to 6m0s for node "addons-445250" to be "Ready" ...
	I0923 10:22:32.127879   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:22:32.239522   11967 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 10:22:32.239604   11967 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 10:22:32.329601   11967 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 10:22:32.329684   11967 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 10:22:32.343924   11967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 10:22:32.344005   11967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 10:22:32.445635   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.11638761s)
	I0923 10:22:32.633750   11967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 10:22:32.633833   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 10:22:32.638879   11967 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:22:32.638905   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 10:22:32.645923   11967 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 10:22:32.645948   11967 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 10:22:32.649688   11967 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-445250" context rescaled to 1 replicas
	I0923 10:22:32.949369   11967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 10:22:32.949444   11967 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 10:22:33.033520   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.704142355s)
	I0923 10:22:33.227536   11967 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:22:33.227561   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 10:22:33.239664   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:22:33.427133   11967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 10:22:33.427213   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 10:22:33.532321   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:22:33.727155   11967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 10:22:33.727249   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 10:22:34.046873   11967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:22:34.046911   11967 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 10:22:34.149845   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:34.233755   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:22:35.227770   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.885253813s)
	I0923 10:22:35.227973   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.882480333s)
	I0923 10:22:36.338824   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.898369608s)
	I0923 10:22:36.338860   11967 addons.go:475] Verifying addon ingress=true in "addons-445250"
	I0923 10:22:36.339007   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.792498678s)
	I0923 10:22:36.339038   11967 addons.go:475] Verifying addon registry=true in "addons-445250"
	I0923 10:22:36.339090   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.703092374s)
	I0923 10:22:36.338954   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.893724471s)
	I0923 10:22:36.339140   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.300273199s)
	I0923 10:22:36.339204   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.211246073s)
	I0923 10:22:36.340459   11967 addons.go:475] Verifying addon metrics-server=true in "addons-445250"
	I0923 10:22:36.340949   11967 out.go:177] * Verifying registry addon...
	I0923 10:22:36.340964   11967 out.go:177] * Verifying ingress addon...
	I0923 10:22:36.342063   11967 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-445250 service yakd-dashboard -n yakd-dashboard
	
	I0923 10:22:36.343731   11967 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 10:22:36.343939   11967 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 10:22:36.348954   11967 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 10:22:36.348976   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:36.350483   11967 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 10:22:36.350503   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:36.554792   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:36.933962   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:36.937073   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:37.041012   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.801300714s)
	W0923 10:22:37.041067   11967 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 10:22:37.041095   11967 retry.go:31] will retry after 370.601258ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 10:22:37.041141   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.508711885s)
	I0923 10:22:37.291210   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.057397179s)
	I0923 10:22:37.291243   11967 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-445250"
	I0923 10:22:37.293123   11967 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 10:22:37.295283   11967 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 10:22:37.330959   11967 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 10:22:37.330988   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:37.411870   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:22:37.431984   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:37.432434   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:37.799504   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:37.846652   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:37.847318   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:37.894861   11967 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 10:22:37.894922   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:37.910904   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:38.037420   11967 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 10:22:38.132707   11967 addons.go:234] Setting addon gcp-auth=true in "addons-445250"
	I0923 10:22:38.132762   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:38.133409   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:38.167105   11967 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 10:22:38.167160   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:38.184042   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:38.329969   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:38.348850   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:38.349827   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:38.798829   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:38.847385   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:38.847868   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:39.052033   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:39.298764   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:39.347022   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:39.347523   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:39.828069   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:39.847719   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:39.848042   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:40.039490   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.627570811s)
	I0923 10:22:40.039578   11967 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.872440211s)
	I0923 10:22:40.042091   11967 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:22:40.043723   11967 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 10:22:40.045225   11967 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 10:22:40.045257   11967 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 10:22:40.064228   11967 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 10:22:40.064253   11967 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 10:22:40.082222   11967 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:22:40.082246   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 10:22:40.136907   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:22:40.329111   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:40.347816   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:40.348314   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:40.743497   11967 addons.go:475] Verifying addon gcp-auth=true in "addons-445250"
	I0923 10:22:40.745653   11967 out.go:177] * Verifying gcp-auth addon...
	I0923 10:22:40.747983   11967 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 10:22:40.750552   11967 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 10:22:40.750569   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:40.851357   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:40.851626   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:40.852043   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:41.052135   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:41.252005   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:41.298689   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:41.347279   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:41.347602   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:41.751632   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:41.798093   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:41.846555   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:41.847144   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:42.250929   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:42.298461   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:42.346929   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:42.347234   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:42.750863   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:42.798245   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:42.846557   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:42.847000   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:43.251127   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:43.355618   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:43.356062   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:43.356298   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:43.552536   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:43.751077   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:43.798734   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:43.847103   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:43.847581   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:44.251644   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:44.298309   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:44.346771   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:44.347034   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:44.750721   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:44.798594   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:44.847101   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:44.847535   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:45.251199   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:45.299044   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:45.347547   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:45.348189   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:45.750705   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:45.798232   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:45.846841   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:45.847120   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:46.052259   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:46.250899   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:46.298397   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:46.346963   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:46.347430   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:46.751856   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:46.798438   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:46.846533   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:46.846987   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:47.250492   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:47.298879   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:47.347127   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:47.347819   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:47.751310   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:47.799015   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:47.847226   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:47.847773   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:48.251448   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:48.298949   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:48.347317   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:48.347589   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:48.551694   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:48.752137   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:48.798472   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:48.846972   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:48.847400   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:49.251341   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:49.298972   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:49.347471   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:49.347951   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:49.750703   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:49.799078   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:49.847429   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:49.847812   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:50.251408   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:50.298942   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:50.347421   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:50.347893   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:50.552081   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:50.750626   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:50.798173   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:50.847276   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:50.848032   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:51.251477   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:51.298961   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:51.347458   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:51.347867   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:51.750664   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:51.798274   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:51.846535   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:51.847185   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:52.250749   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:52.298515   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:52.346957   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:52.347409   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:52.552403   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:52.751135   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:52.798654   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:52.847072   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:52.847476   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:53.251020   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:53.298711   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:53.347029   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:53.347626   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:53.751732   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:53.798241   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:53.846461   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:53.846962   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:54.250842   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:54.298271   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:54.346627   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:54.346949   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:54.750779   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:54.798482   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:54.846682   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:54.847175   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:55.052413   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:55.251076   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:55.298677   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:55.347003   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:55.347743   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:55.751068   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:55.798538   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:55.847067   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:55.847484   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:56.251150   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:56.298943   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:56.347453   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:56.347896   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:56.751095   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:56.798596   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:56.846745   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:56.847179   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:57.250839   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:57.298505   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:57.347074   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:57.347506   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:57.551579   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:57.751148   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:57.798529   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:57.846924   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:57.847369   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:58.251170   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:58.298665   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:58.347156   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:58.347556   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:58.751622   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:58.798291   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:58.846556   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:58.847159   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:59.251703   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:59.298260   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:59.346762   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:59.347393   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:59.552250   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:59.750656   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:59.798196   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:59.846497   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:59.846841   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:00.251199   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:00.298537   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:00.347146   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:00.347462   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:00.751327   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:00.798720   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:00.846991   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:00.847390   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:01.251092   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:01.298651   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:01.346885   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:01.347266   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:01.552532   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:23:01.751323   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:01.798797   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:01.847134   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:01.847636   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:02.251414   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:02.299046   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:02.346776   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:02.346976   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:02.751210   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:02.798665   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:02.847041   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:02.847588   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:03.251424   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:03.298846   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:03.347477   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:03.347937   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:03.751580   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:03.797877   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:03.847450   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:03.847870   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:04.052155   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:23:04.250974   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:04.298589   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:04.346910   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:04.347485   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:04.751530   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:04.799567   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:04.846770   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:04.847184   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:05.251007   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:05.298445   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:05.347135   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:05.347527   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:05.751388   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:05.799093   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:05.847646   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:05.848031   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:06.052367   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:23:06.250840   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:06.298387   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:06.346761   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:06.347238   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:06.751720   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:06.798318   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:06.846779   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:06.847219   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:07.251318   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:07.298911   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:07.347408   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:07.347769   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:07.751469   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:07.798992   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:07.847606   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:07.847853   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:08.251235   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:08.298906   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:08.347450   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:08.348057   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:08.552198   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:23:08.750869   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:08.798365   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:08.846408   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:08.846765   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:09.251760   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:09.298434   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:09.346956   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:09.347369   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:09.750692   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:09.798046   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:09.847526   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:09.848062   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:10.250707   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:10.298206   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:10.346577   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:10.346962   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:10.552617   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:23:10.750936   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:10.798405   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:10.846773   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:10.847081   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:11.250576   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:11.298011   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:11.347411   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:11.347813   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:11.750864   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:11.798382   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:11.846687   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:11.847174   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:12.250954   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:12.298486   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:12.346963   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:12.347499   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:12.552672   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:23:12.751565   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:12.798263   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:12.846609   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:12.847288   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:13.250649   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:13.298224   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:13.346581   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:13.347009   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:13.750948   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:13.798498   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:13.846756   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:13.847196   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:14.250875   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:14.298430   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:14.346812   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:14.347181   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:14.757534   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:14.831755   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:14.863298   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:14.863307   11967 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 10:23:14.863337   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:15.054652   11967 node_ready.go:49] node "addons-445250" has status "Ready":"True"
	I0923 10:23:15.054684   11967 node_ready.go:38] duration metric: took 43.005633575s for node "addons-445250" to be "Ready" ...
	I0923 10:23:15.054698   11967 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:23:15.138931   11967 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fx58w" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:15.251452   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:15.301612   11967 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 10:23:15.301637   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:15.427434   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:15.428141   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:15.753427   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:15.855252   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:15.855518   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:15.855536   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:16.143869   11967 pod_ready.go:93] pod "coredns-7c65d6cfc9-fx58w" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:16.143890   11967 pod_ready.go:82] duration metric: took 1.004925199s for pod "coredns-7c65d6cfc9-fx58w" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.143908   11967 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-445250" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.147771   11967 pod_ready.go:93] pod "etcd-addons-445250" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:16.147797   11967 pod_ready.go:82] duration metric: took 3.880973ms for pod "etcd-addons-445250" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.147813   11967 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-445250" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.151340   11967 pod_ready.go:93] pod "kube-apiserver-addons-445250" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:16.151360   11967 pod_ready.go:82] duration metric: took 3.538721ms for pod "kube-apiserver-addons-445250" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.151379   11967 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-445250" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.154908   11967 pod_ready.go:93] pod "kube-controller-manager-addons-445250" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:16.154925   11967 pod_ready.go:82] duration metric: took 3.540171ms for pod "kube-controller-manager-addons-445250" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.154937   11967 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wkmtk" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.251122   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:16.252541   11967 pod_ready.go:93] pod "kube-proxy-wkmtk" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:16.252560   11967 pod_ready.go:82] duration metric: took 97.616289ms for pod "kube-proxy-wkmtk" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.252569   11967 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-445250" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.298885   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:16.346935   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:16.347232   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:16.652929   11967 pod_ready.go:93] pod "kube-scheduler-addons-445250" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:16.652958   11967 pod_ready.go:82] duration metric: took 400.380255ms for pod "kube-scheduler-addons-445250" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.652971   11967 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.751305   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:16.799551   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:16.847949   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:16.848185   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:17.251997   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:17.299328   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:17.347771   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:17.348037   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:17.751574   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:17.799610   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:17.848015   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:17.848650   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:18.250930   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:18.299312   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:18.347764   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:18.348433   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:18.659062   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:18.752418   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:18.799730   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:18.847222   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:18.847395   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:19.251503   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:19.299737   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:19.347056   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:19.347618   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:19.751323   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:19.799536   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:19.847668   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:19.847798   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:20.251502   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:20.299967   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:20.347574   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:20.347946   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:20.752027   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:20.799582   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:20.847709   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:20.848055   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:21.159098   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:21.252132   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:21.299964   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:21.346779   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:21.347020   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:21.755745   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:21.858762   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:21.859520   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:21.860194   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:22.251409   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:22.300044   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:22.346882   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:22.347235   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:22.751664   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:22.852777   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:22.853039   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:22.853226   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:23.251068   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:23.299520   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:23.347578   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:23.347931   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:23.658413   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:23.751068   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:23.851935   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:23.852480   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:23.852589   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:24.251460   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:24.299593   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:24.347663   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:24.348012   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:24.752139   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:24.829769   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:24.848533   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:24.848714   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:25.250787   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:25.299026   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:25.347280   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:25.347450   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:25.751684   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:25.852481   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:25.852917   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:25.853012   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:26.158564   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:26.251164   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:26.299637   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:26.346953   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:26.347393   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:26.751177   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:26.800249   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:26.900081   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:26.900480   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:27.251580   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:27.299779   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:27.352497   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:27.353041   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:27.751475   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:27.853114   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:27.853317   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:27.853731   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:28.158745   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:28.251214   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:28.298730   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:28.347152   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:28.347334   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:28.751028   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:28.851629   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:28.852256   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:28.852278   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:29.251249   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:29.299788   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:29.347140   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:29.347661   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:29.752405   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:29.800154   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:29.846882   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:29.847331   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:30.251406   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:30.300215   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:30.347056   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:30.347641   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:30.658131   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:30.751454   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:30.800486   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:30.847123   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:30.847735   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:31.252032   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:31.300096   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:31.352870   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:31.353371   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:31.751766   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:31.804056   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:31.847133   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:31.847758   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:32.251744   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:32.299223   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:32.347388   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:32.347653   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:32.751592   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:32.799179   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:32.847018   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:32.847414   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:33.159363   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:33.251561   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:33.298927   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:33.347523   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:33.347589   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:33.751641   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:33.798959   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:33.847153   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:33.847494   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:34.251511   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:34.329090   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:34.346926   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:34.347136   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:34.751327   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:34.852511   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:34.853626   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:34.853680   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:35.252182   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:35.299378   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:35.347693   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:35.348033   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:35.658931   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:35.752279   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:35.800092   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:35.852945   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:35.853579   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:36.251424   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:36.300230   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:36.400564   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:36.400859   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:36.750934   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:36.799444   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:36.848021   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:36.848290   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:37.251588   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:37.300049   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:37.352506   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:37.352773   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:37.750964   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:37.799890   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:37.852354   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:37.852606   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:38.158705   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:38.251162   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:38.299581   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:38.348004   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:38.348356   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:38.751115   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:38.799410   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:38.848082   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:38.848190   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:39.251584   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:39.300084   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:39.347098   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:39.347599   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:39.751816   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:39.799402   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:39.847842   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:39.848902   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:40.251286   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:40.299565   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:40.347972   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:40.348284   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:40.659317   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:40.751324   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:40.829307   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:40.847208   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:40.847905   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:41.251435   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:41.328874   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:41.347851   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:41.348180   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:41.751765   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:41.799242   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:41.847826   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:41.848244   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:42.252250   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:42.299996   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:42.348350   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:42.348560   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:42.751101   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:42.828445   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:42.847541   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:42.847967   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:43.158526   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:43.251605   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:43.331532   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:43.347791   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:43.348403   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:43.751856   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:43.848295   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:43.850906   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:43.850993   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:44.251344   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:44.299676   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:44.348302   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:44.348644   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:44.751195   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:44.799580   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:44.847459   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:44.847814   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:45.251761   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:45.299534   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:45.347837   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:45.348259   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:45.658244   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:45.751109   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:45.799378   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:45.847844   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:45.848484   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:46.250862   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:46.299309   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:46.347588   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:46.347881   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:46.752096   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:46.830819   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:46.849324   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:46.849449   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:47.251489   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:47.329696   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:47.352328   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:47.352675   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:47.659565   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:47.751839   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:47.799363   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:47.847629   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:47.848160   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:48.251322   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:48.299644   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:48.348437   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:48.349011   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:48.751971   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:48.799208   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:48.847311   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:48.847677   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:49.252345   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:49.353336   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:49.354113   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:49.354290   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:49.751696   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:49.798850   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:49.847044   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:49.847294   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:50.159412   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:50.251315   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:50.300359   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:50.347407   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:50.347852   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:50.752148   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:50.853086   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:50.853794   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:50.853937   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:51.251808   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:51.299349   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:51.347605   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:51.347803   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:51.770445   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:51.800059   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:51.847168   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:51.847523   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:52.252094   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:52.299496   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:52.347921   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:52.348258   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:52.658499   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:52.751149   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:52.799391   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:52.847917   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:52.848195   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:53.251084   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:53.299459   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:53.348165   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:53.349134   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:53.751749   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:53.799085   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:53.847146   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:53.847698   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:54.251525   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:54.299814   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:54.347087   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:54.347485   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:54.658586   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:54.751738   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:54.798920   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:54.847135   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:54.847497   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:55.251534   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:55.299916   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:55.347315   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:55.347570   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:55.751726   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:55.799056   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:55.847243   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:55.847517   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:56.250904   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:56.329860   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:56.347928   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:56.348157   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:56.659564   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:56.751715   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:56.798895   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:56.848713   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:56.849087   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:57.327397   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:57.330171   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:57.347623   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:57.349031   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:57.752514   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:57.831070   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:57.849260   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:57.929421   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:58.251507   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:58.329086   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:58.348239   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:58.349299   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:58.659625   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:58.751107   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:58.828912   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:58.848131   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:58.848674   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:59.251980   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:59.329647   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:59.347593   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:59.348472   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:59.751659   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:59.799518   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:59.847937   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:59.848242   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:00.251754   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:00.299551   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:00.348376   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:00.348776   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:00.751910   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:00.799228   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:00.847852   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:00.848386   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:01.159636   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:01.251545   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:01.300654   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:01.347444   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:01.347798   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:01.751291   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:01.799969   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:01.847062   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:01.847151   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:02.250921   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:02.299432   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:02.347411   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:02.347701   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:02.751456   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:02.799637   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:02.846847   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:02.847345   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:03.251056   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:03.299408   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:03.349455   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:03.349496   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:03.658044   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:03.751632   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:03.800475   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:03.847815   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:03.848013   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:04.251337   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:04.300084   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:04.347301   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:04.347740   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:04.751934   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:04.828237   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:04.847307   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:04.847974   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:05.252170   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:05.328300   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:05.347284   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:05.347600   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:05.658958   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:05.752071   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:05.853170   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:05.853743   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:05.853986   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:06.252000   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:06.328730   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:06.347793   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:06.348249   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:06.751665   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:06.828961   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:06.849117   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:06.849787   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:07.251616   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:07.300647   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:07.347543   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:07.348597   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:07.771812   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:07.876237   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:07.876559   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:07.877562   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:08.159319   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:08.251653   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:08.299807   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:08.348721   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:08.348927   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:08.752006   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:08.799289   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:08.847514   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:08.847770   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:09.251104   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:09.299398   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:09.347380   11967 kapi.go:107] duration metric: took 1m33.003646242s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 10:24:09.347767   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:09.751748   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:09.800319   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:09.847334   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:10.251156   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:10.299664   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:10.348059   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:10.658634   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:10.750897   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:10.799121   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:10.847887   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:11.251008   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:11.299420   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:11.348466   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:11.750925   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:11.799015   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:11.847967   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:12.251825   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:12.299403   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:12.347748   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:12.751282   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:12.800000   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:12.847468   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:13.159267   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:13.251700   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:13.299065   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:13.347829   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:13.752005   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:13.799406   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:13.853893   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:14.253633   11967 kapi.go:107] duration metric: took 1m33.505659378s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 10:24:14.257404   11967 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-445250 cluster.
	I0923 10:24:14.258882   11967 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 10:24:14.260323   11967 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 10:24:14.299938   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:14.348717   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:14.799563   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:14.847849   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:15.329354   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:15.347994   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:15.658992   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:15.799926   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:15.847969   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:16.299363   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:16.348302   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:16.799654   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:16.848799   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:17.299696   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:17.348435   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:17.659051   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:17.799970   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:17.848268   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:18.300125   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:18.400393   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:18.799588   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:18.848195   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:19.300200   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:19.348989   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:19.799189   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:19.847633   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:20.166062   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:20.330843   11967 kapi.go:107] duration metric: took 1m43.035557511s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 10:24:20.348554   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:20.848824   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:21.348354   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:21.848082   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:22.348802   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:22.659419   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:22.847751   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:23.348517   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:23.848949   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:24.347848   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:24.848694   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:25.158725   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:25.348583   11967 kapi.go:107] duration metric: took 1m49.004639978s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 10:24:25.350870   11967 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, ingress-dns, storage-provisioner-rancher, storage-provisioner, cloud-spanner, metrics-server, yakd, inspektor-gadget, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0923 10:24:25.353079   11967 addons.go:510] duration metric: took 1m54.712359706s for enable addons: enabled=[nvidia-device-plugin default-storageclass ingress-dns storage-provisioner-rancher storage-provisioner cloud-spanner metrics-server yakd inspektor-gadget volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I0923 10:24:27.658306   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:29.658584   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:32.158410   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:34.657759   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:36.658121   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:38.658705   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:40.659320   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:41.658677   11967 pod_ready.go:93] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"True"
	I0923 10:24:41.658710   11967 pod_ready.go:82] duration metric: took 1m25.005729374s for pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace to be "Ready" ...
	I0923 10:24:41.658725   11967 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-649c2" in "kube-system" namespace to be "Ready" ...
	I0923 10:24:41.663462   11967 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-649c2" in "kube-system" namespace has status "Ready":"True"
	I0923 10:24:41.663484   11967 pod_ready.go:82] duration metric: took 4.751466ms for pod "nvidia-device-plugin-daemonset-649c2" in "kube-system" namespace to be "Ready" ...
	I0923 10:24:41.663503   11967 pod_ready.go:39] duration metric: took 1m26.60878964s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:24:41.663521   11967 api_server.go:52] waiting for apiserver process to appear ...
	I0923 10:24:41.663567   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0923 10:24:41.663611   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 10:24:41.696491   11967 cri.go:89] found id: "8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1"
	I0923 10:24:41.696517   11967 cri.go:89] found id: ""
	I0923 10:24:41.696526   11967 logs.go:276] 1 containers: [8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1]
	I0923 10:24:41.696575   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:41.699787   11967 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0923 10:24:41.699845   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 10:24:41.732611   11967 cri.go:89] found id: "5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478"
	I0923 10:24:41.732632   11967 cri.go:89] found id: ""
	I0923 10:24:41.732641   11967 logs.go:276] 1 containers: [5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478]
	I0923 10:24:41.732680   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:41.736045   11967 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0923 10:24:41.736113   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 10:24:41.768329   11967 cri.go:89] found id: "1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2"
	I0923 10:24:41.768360   11967 cri.go:89] found id: ""
	I0923 10:24:41.768370   11967 logs.go:276] 1 containers: [1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2]
	I0923 10:24:41.768426   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:41.771643   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0923 10:24:41.771702   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 10:24:41.805603   11967 cri.go:89] found id: "5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9"
	I0923 10:24:41.805627   11967 cri.go:89] found id: ""
	I0923 10:24:41.805637   11967 logs.go:276] 1 containers: [5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9]
	I0923 10:24:41.805686   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:41.808896   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0923 10:24:41.808968   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 10:24:41.843211   11967 cri.go:89] found id: "60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41"
	I0923 10:24:41.843234   11967 cri.go:89] found id: ""
	I0923 10:24:41.843242   11967 logs.go:276] 1 containers: [60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41]
	I0923 10:24:41.843293   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:41.846569   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 10:24:41.846631   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 10:24:41.878951   11967 cri.go:89] found id: "3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86"
	I0923 10:24:41.878969   11967 cri.go:89] found id: ""
	I0923 10:24:41.878977   11967 logs.go:276] 1 containers: [3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86]
	I0923 10:24:41.879015   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:41.882160   11967 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0923 10:24:41.882216   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 10:24:41.913249   11967 cri.go:89] found id: "3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147"
	I0923 10:24:41.913273   11967 cri.go:89] found id: ""
	I0923 10:24:41.913281   11967 logs.go:276] 1 containers: [3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147]
	I0923 10:24:41.913337   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:41.916358   11967 logs.go:123] Gathering logs for kubelet ...
	I0923 10:24:41.916384   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 10:24:41.962291   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778748    1645 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:24:41.962472   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778805    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	W0923 10:24:41.962607   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778858    1645 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:24:41.962764   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778872    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	I0923 10:24:42.000201   11967 logs.go:123] Gathering logs for kube-proxy [60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41] ...
	I0923 10:24:42.000236   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41"
	I0923 10:24:42.033282   11967 logs.go:123] Gathering logs for container status ...
	I0923 10:24:42.033307   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 10:24:42.074054   11967 logs.go:123] Gathering logs for coredns [1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2] ...
	I0923 10:24:42.074089   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2"
	I0923 10:24:42.107707   11967 logs.go:123] Gathering logs for kube-scheduler [5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9] ...
	I0923 10:24:42.107734   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9"
	I0923 10:24:42.144872   11967 logs.go:123] Gathering logs for kube-controller-manager [3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86] ...
	I0923 10:24:42.144926   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86"
	I0923 10:24:42.199993   11967 logs.go:123] Gathering logs for kindnet [3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147] ...
	I0923 10:24:42.200024   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147"
	I0923 10:24:42.234245   11967 logs.go:123] Gathering logs for dmesg ...
	I0923 10:24:42.234274   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 10:24:42.246004   11967 logs.go:123] Gathering logs for describe nodes ...
	I0923 10:24:42.246038   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 10:24:42.353925   11967 logs.go:123] Gathering logs for kube-apiserver [8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1] ...
	I0923 10:24:42.353954   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1"
	I0923 10:24:42.444039   11967 logs.go:123] Gathering logs for etcd [5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478] ...
	I0923 10:24:42.444069   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478"
	I0923 10:24:42.488688   11967 logs.go:123] Gathering logs for CRI-O ...
	I0923 10:24:42.488720   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0923 10:24:42.565082   11967 out.go:358] Setting ErrFile to fd 2...
	I0923 10:24:42.565110   11967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 10:24:42.565165   11967 out.go:270] X Problems detected in kubelet:
	W0923 10:24:42.565173   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778748    1645 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:24:42.565180   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778805    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	W0923 10:24:42.565191   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778858    1645 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:24:42.565197   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778872    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	I0923 10:24:42.565201   11967 out.go:358] Setting ErrFile to fd 2...
	I0923 10:24:42.565206   11967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:24:52.566001   11967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:24:52.579983   11967 api_server.go:72] duration metric: took 2m21.939291421s to wait for apiserver process to appear ...
	I0923 10:24:52.580014   11967 api_server.go:88] waiting for apiserver healthz status ...
	I0923 10:24:52.580048   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0923 10:24:52.580103   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 10:24:52.613694   11967 cri.go:89] found id: "8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1"
	I0923 10:24:52.613720   11967 cri.go:89] found id: ""
	I0923 10:24:52.613729   11967 logs.go:276] 1 containers: [8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1]
	I0923 10:24:52.613775   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:52.617041   11967 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0923 10:24:52.617099   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 10:24:52.649762   11967 cri.go:89] found id: "5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478"
	I0923 10:24:52.649781   11967 cri.go:89] found id: ""
	I0923 10:24:52.649788   11967 logs.go:276] 1 containers: [5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478]
	I0923 10:24:52.649852   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:52.653130   11967 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0923 10:24:52.653186   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 10:24:52.685749   11967 cri.go:89] found id: "1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2"
	I0923 10:24:52.685769   11967 cri.go:89] found id: ""
	I0923 10:24:52.685775   11967 logs.go:276] 1 containers: [1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2]
	I0923 10:24:52.685813   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:52.688875   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0923 10:24:52.688931   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 10:24:52.721693   11967 cri.go:89] found id: "5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9"
	I0923 10:24:52.721716   11967 cri.go:89] found id: ""
	I0923 10:24:52.721723   11967 logs.go:276] 1 containers: [5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9]
	I0923 10:24:52.721772   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:52.725081   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0923 10:24:52.725136   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 10:24:52.759437   11967 cri.go:89] found id: "60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41"
	I0923 10:24:52.759464   11967 cri.go:89] found id: ""
	I0923 10:24:52.759474   11967 logs.go:276] 1 containers: [60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41]
	I0923 10:24:52.759530   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:52.762872   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 10:24:52.762937   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 10:24:52.797876   11967 cri.go:89] found id: "3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86"
	I0923 10:24:52.797893   11967 cri.go:89] found id: ""
	I0923 10:24:52.797900   11967 logs.go:276] 1 containers: [3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86]
	I0923 10:24:52.797940   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:52.801151   11967 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0923 10:24:52.801201   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 10:24:52.833315   11967 cri.go:89] found id: "3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147"
	I0923 10:24:52.833339   11967 cri.go:89] found id: ""
	I0923 10:24:52.833346   11967 logs.go:276] 1 containers: [3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147]
	I0923 10:24:52.833387   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:52.836655   11967 logs.go:123] Gathering logs for describe nodes ...
	I0923 10:24:52.836681   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 10:24:52.927959   11967 logs.go:123] Gathering logs for kube-apiserver [8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1] ...
	I0923 10:24:52.927988   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1"
	I0923 10:24:52.970219   11967 logs.go:123] Gathering logs for coredns [1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2] ...
	I0923 10:24:52.970246   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2"
	I0923 10:24:53.005352   11967 logs.go:123] Gathering logs for kube-scheduler [5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9] ...
	I0923 10:24:53.005388   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9"
	I0923 10:24:53.043256   11967 logs.go:123] Gathering logs for kube-controller-manager [3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86] ...
	I0923 10:24:53.043284   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86"
	I0923 10:24:53.097302   11967 logs.go:123] Gathering logs for CRI-O ...
	I0923 10:24:53.097340   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0923 10:24:53.173928   11967 logs.go:123] Gathering logs for container status ...
	I0923 10:24:53.173959   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 10:24:53.214820   11967 logs.go:123] Gathering logs for dmesg ...
	I0923 10:24:53.214848   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 10:24:53.226459   11967 logs.go:123] Gathering logs for etcd [5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478] ...
	I0923 10:24:53.226486   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478"
	I0923 10:24:53.269173   11967 logs.go:123] Gathering logs for kube-proxy [60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41] ...
	I0923 10:24:53.269204   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41"
	I0923 10:24:53.302182   11967 logs.go:123] Gathering logs for kindnet [3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147] ...
	I0923 10:24:53.302257   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147"
	I0923 10:24:53.338936   11967 logs.go:123] Gathering logs for kubelet ...
	I0923 10:24:53.338965   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 10:24:53.384315   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778748    1645 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:24:53.384503   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778805    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	W0923 10:24:53.384632   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778858    1645 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:24:53.384787   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778872    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	I0923 10:24:53.422192   11967 out.go:358] Setting ErrFile to fd 2...
	I0923 10:24:53.422221   11967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 10:24:53.422272   11967 out.go:270] X Problems detected in kubelet:
	W0923 10:24:53.422279   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778748    1645 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:24:53.422286   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778805    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	W0923 10:24:53.422294   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778858    1645 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:24:53.422303   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778872    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	I0923 10:24:53.422308   11967 out.go:358] Setting ErrFile to fd 2...
	I0923 10:24:53.422314   11967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:25:03.423825   11967 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 10:25:03.428133   11967 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0923 10:25:03.428969   11967 api_server.go:141] control plane version: v1.31.1
	I0923 10:25:03.428992   11967 api_server.go:131] duration metric: took 10.848971435s to wait for apiserver health ...
	I0923 10:25:03.429000   11967 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 10:25:03.429020   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0923 10:25:03.429067   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 10:25:03.463555   11967 cri.go:89] found id: "8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1"
	I0923 10:25:03.463573   11967 cri.go:89] found id: ""
	I0923 10:25:03.463582   11967 logs.go:276] 1 containers: [8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1]
	I0923 10:25:03.463622   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:25:03.466867   11967 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0923 10:25:03.466923   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 10:25:03.498838   11967 cri.go:89] found id: "5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478"
	I0923 10:25:03.498862   11967 cri.go:89] found id: ""
	I0923 10:25:03.498870   11967 logs.go:276] 1 containers: [5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478]
	I0923 10:25:03.498916   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:25:03.502169   11967 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0923 10:25:03.502224   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 10:25:03.535181   11967 cri.go:89] found id: "1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2"
	I0923 10:25:03.535202   11967 cri.go:89] found id: ""
	I0923 10:25:03.535211   11967 logs.go:276] 1 containers: [1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2]
	I0923 10:25:03.535260   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:25:03.538506   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0923 10:25:03.538568   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 10:25:03.571929   11967 cri.go:89] found id: "5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9"
	I0923 10:25:03.571954   11967 cri.go:89] found id: ""
	I0923 10:25:03.571963   11967 logs.go:276] 1 containers: [5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9]
	I0923 10:25:03.572007   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:25:03.575352   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0923 10:25:03.575421   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 10:25:03.608263   11967 cri.go:89] found id: "60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41"
	I0923 10:25:03.608286   11967 cri.go:89] found id: ""
	I0923 10:25:03.608296   11967 logs.go:276] 1 containers: [60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41]
	I0923 10:25:03.608353   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:25:03.611725   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 10:25:03.611781   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 10:25:03.643940   11967 cri.go:89] found id: "3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86"
	I0923 10:25:03.643974   11967 cri.go:89] found id: ""
	I0923 10:25:03.643985   11967 logs.go:276] 1 containers: [3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86]
	I0923 10:25:03.644031   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:25:03.647205   11967 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0923 10:25:03.647259   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 10:25:03.680120   11967 cri.go:89] found id: "3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147"
	I0923 10:25:03.680145   11967 cri.go:89] found id: ""
	I0923 10:25:03.680155   11967 logs.go:276] 1 containers: [3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147]
	I0923 10:25:03.680197   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:25:03.683474   11967 logs.go:123] Gathering logs for describe nodes ...
	I0923 10:25:03.683500   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 10:25:03.783529   11967 logs.go:123] Gathering logs for kube-controller-manager [3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86] ...
	I0923 10:25:03.783558   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86"
	I0923 10:25:03.838870   11967 logs.go:123] Gathering logs for container status ...
	I0923 10:25:03.838909   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 10:25:03.879312   11967 logs.go:123] Gathering logs for kubelet ...
	I0923 10:25:03.879343   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 10:25:03.925363   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778748    1645 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:25:03.925562   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778805    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	W0923 10:25:03.925696   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778858    1645 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:25:03.925851   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778872    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	I0923 10:25:03.966109   11967 logs.go:123] Gathering logs for dmesg ...
	I0923 10:25:03.966148   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 10:25:03.978653   11967 logs.go:123] Gathering logs for coredns [1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2] ...
	I0923 10:25:03.978691   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2"
	I0923 10:25:04.012260   11967 logs.go:123] Gathering logs for kube-scheduler [5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9] ...
	I0923 10:25:04.012287   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9"
	I0923 10:25:04.049729   11967 logs.go:123] Gathering logs for kube-proxy [60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41] ...
	I0923 10:25:04.049759   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41"
	I0923 10:25:04.082626   11967 logs.go:123] Gathering logs for kindnet [3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147] ...
	I0923 10:25:04.082662   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147"
	I0923 10:25:04.117339   11967 logs.go:123] Gathering logs for CRI-O ...
	I0923 10:25:04.117364   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0923 10:25:04.188147   11967 logs.go:123] Gathering logs for kube-apiserver [8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1] ...
	I0923 10:25:04.188192   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1"
	I0923 10:25:04.230982   11967 logs.go:123] Gathering logs for etcd [5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478] ...
	I0923 10:25:04.231014   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478"
	I0923 10:25:04.275512   11967 out.go:358] Setting ErrFile to fd 2...
	I0923 10:25:04.275542   11967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 10:25:04.275603   11967 out.go:270] X Problems detected in kubelet:
	W0923 10:25:04.275611   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778748    1645 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:25:04.275621   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778805    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	W0923 10:25:04.275632   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778858    1645 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:25:04.275639   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778872    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	I0923 10:25:04.275644   11967 out.go:358] Setting ErrFile to fd 2...
	I0923 10:25:04.275655   11967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:25:14.287581   11967 system_pods.go:59] 18 kube-system pods found
	I0923 10:25:14.287615   11967 system_pods.go:61] "coredns-7c65d6cfc9-fx58w" [76135cab-71d6-4fbc-9730-7e157e19b3d1] Running
	I0923 10:25:14.287621   11967 system_pods.go:61] "csi-hostpath-attacher-0" [c14ba032-645c-477a-8576-55cfd6df0d60] Running
	I0923 10:25:14.287624   11967 system_pods.go:61] "csi-hostpath-resizer-0" [9c153fb6-cf96-4170-aba0-81da3c93da24] Running
	I0923 10:25:14.287628   11967 system_pods.go:61] "csi-hostpathplugin-jb7xc" [e6337313-aeb5-44b2-9ac3-0ad53d08846e] Running
	I0923 10:25:14.287631   11967 system_pods.go:61] "etcd-addons-445250" [3f591ed3-ef76-488a-8099-62df99f1aad4] Running
	I0923 10:25:14.287634   11967 system_pods.go:61] "kindnet-dzbp5" [add1ea93-1e0d-43a8-bef7-651410611beb] Running
	I0923 10:25:14.287638   11967 system_pods.go:61] "kube-apiserver-addons-445250" [dc91b9f8-0364-49b3-9a53-60f0bcda9e0f] Running
	I0923 10:25:14.287641   11967 system_pods.go:61] "kube-controller-manager-addons-445250" [cf367f20-e011-4533-85f2-3353fc3d0730] Running
	I0923 10:25:14.287646   11967 system_pods.go:61] "kube-ingress-dns-minikube" [2eb91201-ae53-4248-b0dc-bc754dc7f77c] Running
	I0923 10:25:14.287649   11967 system_pods.go:61] "kube-proxy-wkmtk" [fbf3d292-a3ed-4397-bfb9-c32ebca66f2a] Running
	I0923 10:25:14.287652   11967 system_pods.go:61] "kube-scheduler-addons-445250" [a53aad31-25c2-4939-a256-7dedca01ddd7] Running
	I0923 10:25:14.287656   11967 system_pods.go:61] "metrics-server-84c5f94fbc-7csnr" [de3ce7e3-ca3b-4719-baa0-60b0964a15e6] Running
	I0923 10:25:14.287661   11967 system_pods.go:61] "nvidia-device-plugin-daemonset-649c2" [ad56c28d-1cef-404e-a46b-44ed08feea84] Running
	I0923 10:25:14.287666   11967 system_pods.go:61] "registry-66c9cd494c-nrpsw" [40d0085a-ea70-4052-ad07-a26bb7092539] Running
	I0923 10:25:14.287672   11967 system_pods.go:61] "registry-proxy-gnlc5" [d7382df4-3be8-48d0-9dcb-8cb5cc78647c] Running
	I0923 10:25:14.287675   11967 system_pods.go:61] "snapshot-controller-56fcc65765-dlmwp" [fd57301d-090a-49ee-a7a9-64fe81f0524a] Running
	I0923 10:25:14.287681   11967 system_pods.go:61] "snapshot-controller-56fcc65765-gvjzd" [8a3bfbc9-c59d-4af0-9e6d-c7823fa7b098] Running
	I0923 10:25:14.287685   11967 system_pods.go:61] "storage-provisioner" [b95afb17-c57c-4bcb-9763-8c43faa5ee12] Running
	I0923 10:25:14.287693   11967 system_pods.go:74] duration metric: took 10.858688236s to wait for pod list to return data ...
	I0923 10:25:14.287702   11967 default_sa.go:34] waiting for default service account to be created ...
	I0923 10:25:14.289991   11967 default_sa.go:45] found service account: "default"
	I0923 10:25:14.290010   11967 default_sa.go:55] duration metric: took 2.299912ms for default service account to be created ...
	I0923 10:25:14.290018   11967 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 10:25:14.298150   11967 system_pods.go:86] 18 kube-system pods found
	I0923 10:25:14.298176   11967 system_pods.go:89] "coredns-7c65d6cfc9-fx58w" [76135cab-71d6-4fbc-9730-7e157e19b3d1] Running
	I0923 10:25:14.298181   11967 system_pods.go:89] "csi-hostpath-attacher-0" [c14ba032-645c-477a-8576-55cfd6df0d60] Running
	I0923 10:25:14.298185   11967 system_pods.go:89] "csi-hostpath-resizer-0" [9c153fb6-cf96-4170-aba0-81da3c93da24] Running
	I0923 10:25:14.298188   11967 system_pods.go:89] "csi-hostpathplugin-jb7xc" [e6337313-aeb5-44b2-9ac3-0ad53d08846e] Running
	I0923 10:25:14.298192   11967 system_pods.go:89] "etcd-addons-445250" [3f591ed3-ef76-488a-8099-62df99f1aad4] Running
	I0923 10:25:14.298196   11967 system_pods.go:89] "kindnet-dzbp5" [add1ea93-1e0d-43a8-bef7-651410611beb] Running
	I0923 10:25:14.298200   11967 system_pods.go:89] "kube-apiserver-addons-445250" [dc91b9f8-0364-49b3-9a53-60f0bcda9e0f] Running
	I0923 10:25:14.298205   11967 system_pods.go:89] "kube-controller-manager-addons-445250" [cf367f20-e011-4533-85f2-3353fc3d0730] Running
	I0923 10:25:14.298208   11967 system_pods.go:89] "kube-ingress-dns-minikube" [2eb91201-ae53-4248-b0dc-bc754dc7f77c] Running
	I0923 10:25:14.298212   11967 system_pods.go:89] "kube-proxy-wkmtk" [fbf3d292-a3ed-4397-bfb9-c32ebca66f2a] Running
	I0923 10:25:14.298218   11967 system_pods.go:89] "kube-scheduler-addons-445250" [a53aad31-25c2-4939-a256-7dedca01ddd7] Running
	I0923 10:25:14.298222   11967 system_pods.go:89] "metrics-server-84c5f94fbc-7csnr" [de3ce7e3-ca3b-4719-baa0-60b0964a15e6] Running
	I0923 10:25:14.298227   11967 system_pods.go:89] "nvidia-device-plugin-daemonset-649c2" [ad56c28d-1cef-404e-a46b-44ed08feea84] Running
	I0923 10:25:14.298230   11967 system_pods.go:89] "registry-66c9cd494c-nrpsw" [40d0085a-ea70-4052-ad07-a26bb7092539] Running
	I0923 10:25:14.298236   11967 system_pods.go:89] "registry-proxy-gnlc5" [d7382df4-3be8-48d0-9dcb-8cb5cc78647c] Running
	I0923 10:25:14.298239   11967 system_pods.go:89] "snapshot-controller-56fcc65765-dlmwp" [fd57301d-090a-49ee-a7a9-64fe81f0524a] Running
	I0923 10:25:14.298244   11967 system_pods.go:89] "snapshot-controller-56fcc65765-gvjzd" [8a3bfbc9-c59d-4af0-9e6d-c7823fa7b098] Running
	I0923 10:25:14.298247   11967 system_pods.go:89] "storage-provisioner" [b95afb17-c57c-4bcb-9763-8c43faa5ee12] Running
	I0923 10:25:14.298253   11967 system_pods.go:126] duration metric: took 8.230518ms to wait for k8s-apps to be running ...
	I0923 10:25:14.298262   11967 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 10:25:14.298303   11967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:25:14.309069   11967 system_svc.go:56] duration metric: took 10.799947ms WaitForService to wait for kubelet
	I0923 10:25:14.309093   11967 kubeadm.go:582] duration metric: took 2m43.668407459s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:25:14.309111   11967 node_conditions.go:102] verifying NodePressure condition ...
	I0923 10:25:14.312018   11967 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0923 10:25:14.312045   11967 node_conditions.go:123] node cpu capacity is 8
	I0923 10:25:14.312058   11967 node_conditions.go:105] duration metric: took 2.941824ms to run NodePressure ...
	I0923 10:25:14.312068   11967 start.go:241] waiting for startup goroutines ...
	I0923 10:25:14.312077   11967 start.go:246] waiting for cluster config update ...
	I0923 10:25:14.312094   11967 start.go:255] writing updated cluster config ...
	I0923 10:25:14.312343   11967 ssh_runner.go:195] Run: rm -f paused
	I0923 10:25:14.359947   11967 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 10:25:14.362510   11967 out.go:177] * Done! kubectl is now configured to use "addons-445250" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 23 10:36:07 addons-445250 crio[1027]: time="2024-09-23 10:36:07.688483902Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=562d87af-1e78-4c8f-8db8-aa21e383b556 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 10:36:07 addons-445250 crio[1027]: time="2024-09-23 10:36:07.689005572Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=562d87af-1e78-4c8f-8db8-aa21e383b556 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 10:36:07 addons-445250 crio[1027]: time="2024-09-23 10:36:07.689755483Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=e3e89635-258f-4b3a-a568-751a4ff692b9 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 10:36:07 addons-445250 crio[1027]: time="2024-09-23 10:36:07.690355344Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:a82eba7887a40ecae558433f34225b2611dc77f982ce05b1ddb9b282b780fc86],Size_:4944818,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=e3e89635-258f-4b3a-a568-751a4ff692b9 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 10:36:07 addons-445250 crio[1027]: time="2024-09-23 10:36:07.691085432Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-cz95t/hello-world-app" id=ab7332e8-346f-48fd-83c4-2511dfc72d6a name=/runtime.v1.RuntimeService/CreateContainer
	Sep 23 10:36:07 addons-445250 crio[1027]: time="2024-09-23 10:36:07.691169155Z" level=warning msg="Allowed annotations are specified for workload []"
	Sep 23 10:36:07 addons-445250 crio[1027]: time="2024-09-23 10:36:07.707212480Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/4fbb60f94bb0ac489421d77d2329df23ec230f00ed66a8021c51d3341966ed47/merged/etc/passwd: no such file or directory"
	Sep 23 10:36:07 addons-445250 crio[1027]: time="2024-09-23 10:36:07.707250068Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/4fbb60f94bb0ac489421d77d2329df23ec230f00ed66a8021c51d3341966ed47/merged/etc/group: no such file or directory"
	Sep 23 10:36:07 addons-445250 crio[1027]: time="2024-09-23 10:36:07.741295535Z" level=info msg="Created container ddf43cced473fd4c27d2775ae4113446ed575699f852ed3b327f228cbd5e0974: default/hello-world-app-55bf9c44b4-cz95t/hello-world-app" id=ab7332e8-346f-48fd-83c4-2511dfc72d6a name=/runtime.v1.RuntimeService/CreateContainer
	Sep 23 10:36:07 addons-445250 crio[1027]: time="2024-09-23 10:36:07.741855373Z" level=info msg="Starting container: ddf43cced473fd4c27d2775ae4113446ed575699f852ed3b327f228cbd5e0974" id=e9580985-ab25-4085-b863-2a073e5c0145 name=/runtime.v1.RuntimeService/StartContainer
	Sep 23 10:36:07 addons-445250 crio[1027]: time="2024-09-23 10:36:07.747324459Z" level=info msg="Started container" PID=11718 containerID=ddf43cced473fd4c27d2775ae4113446ed575699f852ed3b327f228cbd5e0974 description=default/hello-world-app-55bf9c44b4-cz95t/hello-world-app id=e9580985-ab25-4085-b863-2a073e5c0145 name=/runtime.v1.RuntimeService/StartContainer sandboxID=cacffc01918eac597e6a11b17dd8c54bd6c39a013e97558570075789ee49be57
	Sep 23 10:36:09 addons-445250 crio[1027]: time="2024-09-23 10:36:09.262387325Z" level=warning msg="Stopping container 4694d204eb1eaedd3a26bda9a8ce1f260b7f12c4260a2d70e2bce4baa5218b0b with stop signal timed out: timeout reached after 2 seconds waiting for container process to exit" id=21c4c882-d0e2-4d6d-a36b-738bac02e4b2 name=/runtime.v1.RuntimeService/StopContainer
	Sep 23 10:36:09 addons-445250 conmon[6226]: conmon 4694d204eb1eaedd3a26 <ninfo>: container 6238 exited with status 137
	Sep 23 10:36:09 addons-445250 crio[1027]: time="2024-09-23 10:36:09.392650722Z" level=info msg="Stopped container 4694d204eb1eaedd3a26bda9a8ce1f260b7f12c4260a2d70e2bce4baa5218b0b: ingress-nginx/ingress-nginx-controller-bc57996ff-p4lgm/controller" id=21c4c882-d0e2-4d6d-a36b-738bac02e4b2 name=/runtime.v1.RuntimeService/StopContainer
	Sep 23 10:36:09 addons-445250 crio[1027]: time="2024-09-23 10:36:09.393194750Z" level=info msg="Stopping pod sandbox: b5b8e48a5b762afae728e39eff182c447f6bdb089a41b5b89cbfdc24c3db0fb6" id=71c50a4f-9811-4ef8-aa25-1c0c6a4244db name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 23 10:36:09 addons-445250 crio[1027]: time="2024-09-23 10:36:09.396165615Z" level=info msg="Restoring iptables rules: *nat\n:KUBE-HP-JH5HFPCV6DIVZSQY - [0:0]\n:KUBE-HP-YR5BH7NWHISXU5A4 - [0:0]\n:KUBE-HOSTPORTS - [0:0]\n-X KUBE-HP-YR5BH7NWHISXU5A4\n-X KUBE-HP-JH5HFPCV6DIVZSQY\nCOMMIT\n"
	Sep 23 10:36:09 addons-445250 crio[1027]: time="2024-09-23 10:36:09.397602861Z" level=info msg="Closing host port tcp:80"
	Sep 23 10:36:09 addons-445250 crio[1027]: time="2024-09-23 10:36:09.397644016Z" level=info msg="Closing host port tcp:443"
	Sep 23 10:36:09 addons-445250 crio[1027]: time="2024-09-23 10:36:09.398989401Z" level=info msg="Host port tcp:80 does not have an open socket"
	Sep 23 10:36:09 addons-445250 crio[1027]: time="2024-09-23 10:36:09.399006227Z" level=info msg="Host port tcp:443 does not have an open socket"
	Sep 23 10:36:09 addons-445250 crio[1027]: time="2024-09-23 10:36:09.399138303Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-bc57996ff-p4lgm Namespace:ingress-nginx ID:b5b8e48a5b762afae728e39eff182c447f6bdb089a41b5b89cbfdc24c3db0fb6 UID:0501f316-a471-4550-ae04-f97444d65783 NetNS:/var/run/netns/668fe546-a02e-426d-b0b3-2aff5df58951 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Sep 23 10:36:09 addons-445250 crio[1027]: time="2024-09-23 10:36:09.399251542Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-bc57996ff-p4lgm from CNI network \"kindnet\" (type=ptp)"
	Sep 23 10:36:09 addons-445250 crio[1027]: time="2024-09-23 10:36:09.439028881Z" level=info msg="Stopped pod sandbox: b5b8e48a5b762afae728e39eff182c447f6bdb089a41b5b89cbfdc24c3db0fb6" id=71c50a4f-9811-4ef8-aa25-1c0c6a4244db name=/runtime.v1.RuntimeService/StopPodSandbox
	Sep 23 10:36:09 addons-445250 crio[1027]: time="2024-09-23 10:36:09.730334433Z" level=info msg="Removing container: 4694d204eb1eaedd3a26bda9a8ce1f260b7f12c4260a2d70e2bce4baa5218b0b" id=27d5665e-1e68-4628-85d6-f287bf6675eb name=/runtime.v1.RuntimeService/RemoveContainer
	Sep 23 10:36:09 addons-445250 crio[1027]: time="2024-09-23 10:36:09.743942203Z" level=info msg="Removed container 4694d204eb1eaedd3a26bda9a8ce1f260b7f12c4260a2d70e2bce4baa5218b0b: ingress-nginx/ingress-nginx-controller-bc57996ff-p4lgm/controller" id=27d5665e-1e68-4628-85d6-f287bf6675eb name=/runtime.v1.RuntimeService/RemoveContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ddf43cced473f       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        6 seconds ago       Running             hello-world-app           0                   cacffc01918ea       hello-world-app-55bf9c44b4-cz95t
	9b9d147b1d7d7       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                              2 minutes ago       Running             nginx                     0                   c86cb59ddb3ca       nginx
	f43878fce15a7       ce263a8653f9cdabdabaf36ae064b3e52b5240e6fac90663ad3b8f3a9bcef242                                                             11 minutes ago      Exited              patch                     3                   fe27de295d179       ingress-nginx-admission-patch-4wv4b
	595e24a79c3cc       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 12 minutes ago      Running             gcp-auth                  0                   269c70f2ed966       gcp-auth-89d5ffd79-wh69l
	d5868858343b4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:1b792367d0e1350ee869b15f851d9e4de17db10f33fadaef628db3e6457aa012   12 minutes ago      Exited              create                    0                   c3d692cdaaff9       ingress-nginx-admission-create-8v7x6
	26fbe31bfc2e3       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a        12 minutes ago      Running             metrics-server            0                   060b6c8c02d4c       metrics-server-84c5f94fbc-7csnr
	1ebaed16470de       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                             12 minutes ago      Running             coredns                   0                   8b47c72a2e89f       coredns-7c65d6cfc9-fx58w
	66c2617c6cdee       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             12 minutes ago      Running             storage-provisioner       0                   ca64b60aaf77d       storage-provisioner
	60d69acfd0786       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                             13 minutes ago      Running             kube-proxy                0                   8b3d1fd790d7d       kube-proxy-wkmtk
	3fc705a9a7747       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                             13 minutes ago      Running             kindnet-cni               0                   16dd7a97e2486       kindnet-dzbp5
	5a7d4dfeab76c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                             13 minutes ago      Running             etcd                      0                   d78357fa957f5       etcd-addons-445250
	3fc6d875aa953       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                             13 minutes ago      Running             kube-controller-manager   0                   b238baa295476       kube-controller-manager-addons-445250
	5e1692605ef5b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                             13 minutes ago      Running             kube-scheduler            0                   1912f3295ca7d       kube-scheduler-addons-445250
	8b87d8d2ee711       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                             13 minutes ago      Running             kube-apiserver            0                   f275d2a0ce43d       kube-apiserver-addons-445250
	
	
	==> coredns [1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2] <==
	[INFO] 10.244.0.17:51021 - 2201 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091016s
	[INFO] 10.244.0.17:48133 - 44271 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000049343s
	[INFO] 10.244.0.17:48133 - 55785 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000088427s
	[INFO] 10.244.0.17:49831 - 11625 "AAAA IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004641559s
	[INFO] 10.244.0.17:49831 - 53357 "A IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.008874643s
	[INFO] 10.244.0.17:47951 - 29897 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004484598s
	[INFO] 10.244.0.17:47951 - 12748 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.01442901s
	[INFO] 10.244.0.17:48028 - 15319 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004123886s
	[INFO] 10.244.0.17:48028 - 211 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004165972s
	[INFO] 10.244.0.17:47195 - 44952 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000070798s
	[INFO] 10.244.0.17:47195 - 64917 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000141683s
	[INFO] 10.244.0.19:37440 - 47006 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000160757s
	[INFO] 10.244.0.19:51770 - 28058 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000235131s
	[INFO] 10.244.0.19:37999 - 57631 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000117212s
	[INFO] 10.244.0.19:60851 - 28099 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000164334s
	[INFO] 10.244.0.19:60473 - 52842 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127623s
	[INFO] 10.244.0.19:60093 - 46732 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000183998s
	[INFO] 10.244.0.19:59180 - 21854 "A IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.005303324s
	[INFO] 10.244.0.19:53723 - 13226 "AAAA IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.006472921s
	[INFO] 10.244.0.19:57517 - 53934 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004844258s
	[INFO] 10.244.0.19:37603 - 62628 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007796574s
	[INFO] 10.244.0.19:52499 - 62644 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004780066s
	[INFO] 10.244.0.19:43363 - 37803 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005512487s
	[INFO] 10.244.0.19:50641 - 54574 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 140 0.000695895s
	[INFO] 10.244.0.19:42118 - 61953 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 116 0.000813877s
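The runs of NXDOMAIN answers above are ordinary resolver behavior, not CoreDNS failures: with the default pod `ndots:5`, any name with fewer than five dots is tried against each search domain before being queried as an absolute name, and only the final absolute query returns NOERROR. A minimal sketch reproducing that expansion order, with the search list inferred from the query suffixes in the log (the exact list in the pod's `resolv.conf` may differ):

```shell
#!/bin/sh
# Reproduce the resolver search-path expansion behind the NXDOMAIN lines above.
# Search domains inferred from the logged query suffixes (GCE-hosted node):
SEARCH="europe-west1-b.c.k8s-minikube.internal c.k8s-minikube.internal google.internal"
NAME="registry.kube-system.svc.cluster.local"   # 4 dots, i.e. fewer than ndots:5
for d in $SEARCH; do
  echo "$NAME.$d"    # each of these attempts is answered NXDOMAIN, as logged
done
echo "$NAME"         # the final absolute query, answered NOERROR by CoreDNS
```

So the registry service name does resolve here; the wget timeout in the failed test happened after successful DNS resolution.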
	
	
	==> describe nodes <==
	Name:               addons-445250
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-445250
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=addons-445250
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T10_22_26_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-445250
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:22:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-445250
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:36:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 10:35:00 +0000   Mon, 23 Sep 2024 10:22:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 10:35:00 +0000   Mon, 23 Sep 2024 10:22:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 10:35:00 +0000   Mon, 23 Sep 2024 10:22:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 10:35:00 +0000   Mon, 23 Sep 2024 10:23:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-445250
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 98cd57bf5c0b47f391b0c0e0a30c5e14
	  System UUID:                64a901d1-6ec3-40d1-a503-55d7681a31ba
	  Boot ID:                    7fc2d313-9727-4ab1-967f-13a3c84ada15
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  default                     hello-world-app-55bf9c44b4-cz95t         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m36s
	  gcp-auth                    gcp-auth-89d5ffd79-wh69l                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 coredns-7c65d6cfc9-fx58w                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     13m
	  kube-system                 etcd-addons-445250                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         13m
	  kube-system                 kindnet-dzbp5                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      13m
	  kube-system                 kube-apiserver-addons-445250             250m (3%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-addons-445250    200m (2%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-wkmtk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-addons-445250             100m (1%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 metrics-server-84c5f94fbc-7csnr          100m (1%)     0 (0%)      200Mi (0%)       0 (0%)         13m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (11%)  100m (1%)
	  memory             420Mi (1%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 13m   kube-proxy       
	  Normal   Starting                 13m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 13m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  13m   kubelet          Node addons-445250 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m   kubelet          Node addons-445250 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m   kubelet          Node addons-445250 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m   node-controller  Node addons-445250 event: Registered Node addons-445250 in Controller
	  Normal   NodeReady                13m   kubelet          Node addons-445250 status is now: NodeReady
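The percentages in the "Allocated resources" table are simply pod requests/limits over the node's allocatable capacity (8 CPUs, 32859312Ki memory above). A quick arithmetic check against those figures, using integer truncation as displayed in the table:

```shell
# Recompute the Allocated-resources percentages from the node's capacity.
# Values copied from the describe-nodes output above; %d truncates the fraction.
awk 'BEGIN {
  printf "cpu requests:    %d%%\n", 950 / 8000 * 100             # 950m of 8 CPUs
  printf "memory requests: %d%%\n", 420 * 1024 / 32859312 * 100  # 420Mi of 32859312Ki
}'
```

which reproduces the 11% CPU and 1% memory shown; the node was far from resource pressure during the failures.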
	
	
	==> dmesg <==
	[  +0.003589] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001035] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000753] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.001022] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000710] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000605] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000867] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000747] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.635766] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.213677] kauditd_printk_skb: 46 callbacks suppressed
	[Sep23 10:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 16 eb c1 e7 c7 82 32 bd d1 20 fc 68 08 00
	[  +1.023987] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 16 eb c1 e7 c7 82 32 bd d1 20 fc 68 08 00
	[  +2.019762] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 16 eb c1 e7 c7 82 32 bd d1 20 fc 68 08 00
	[Sep23 10:34] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 16 eb c1 e7 c7 82 32 bd d1 20 fc 68 08 00
	[  +8.191064] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 16 eb c1 e7 c7 82 32 bd d1 20 fc 68 08 00
	[ +16.126232] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 16 eb c1 e7 c7 82 32 bd d1 20 fc 68 08 00
	[ +33.276298] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 16 eb c1 e7 c7 82 32 bd d1 20 fc 68 08 00
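The repeated "martian source" entries flag packets whose source address (127.0.0.1, loopback) is impossible for the interface they arrived on (eth0); with hostPort/ingress NAT inside a minikube node this is a known-noisy but generally harmless pattern. A small sketch pulling the offending source out of such a line, plus the sysctl that governs the logging:

```shell
# Parse the martian source address out of a dmesg line like the ones above.
line='[Sep23 10:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0'
src="$(echo "$line" | sed -n 's/.*from \([0-9.]*\),.*/\1/p')"
echo "$src"    # 127.0.0.1 — a loopback source arriving on eth0, hence "martian"
# Logging of these is controlled by log_martians (root required to change):
#   sysctl -w net.ipv4.conf.all.log_martians=0
```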
	
	
	==> etcd [5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478] <==
	{"level":"warn","ts":"2024-09-23T10:22:32.950276Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T10:22:32.636458Z","time spent":"313.791137ms","remote":"127.0.0.1:48942","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":689,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns.17f7d86ece95fb66\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns.17f7d86ece95fb66\" value_size:618 lease:8128032086776975414 >> failure:<>"}
	{"level":"info","ts":"2024-09-23T10:22:32.827522Z","caller":"traceutil/trace.go:171","msg":"trace[1221588109] transaction","detail":"{read_only:false; response_revision:364; number_of_response:1; }","duration":"191.005673ms","start":"2024-09-23T10:22:32.636508Z","end":"2024-09-23T10:22:32.827514Z","steps":["trace[1221588109] 'process raft request'  (duration: 190.655353ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:22:32.950466Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T10:22:32.636501Z","time spent":"313.930653ms","remote":"127.0.0.1:49090","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":201,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" mod_revision:284 > success:<request_put:<key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" value_size:136 >> failure:<request_range:<key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" > >"}
	{"level":"warn","ts":"2024-09-23T10:22:33.131317Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.346561ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128032086776975712 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/controllerrevisions/kube-system/nvidia-device-plugin-daemonset-76bfdf4db8\" mod_revision:0 > success:<request_put:<key:\"/registry/controllerrevisions/kube-system/nvidia-device-plugin-daemonset-76bfdf4db8\" value_size:2820 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-23T10:22:33.131445Z","caller":"traceutil/trace.go:171","msg":"trace[1069954861] linearizableReadLoop","detail":"{readStateIndex:377; appliedIndex:376; }","duration":"181.441071ms","start":"2024-09-23T10:22:32.949991Z","end":"2024-09-23T10:22:33.131432Z","steps":["trace[1069954861] 'read index received'  (duration: 75.782049ms)","trace[1069954861] 'applied index is now lower than readState.Index'  (duration: 105.65779ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T10:22:33.131583Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"402.530772ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" ","response":"range_response_count:1 size:992"}
	{"level":"info","ts":"2024-09-23T10:22:33.131616Z","caller":"traceutil/trace.go:171","msg":"trace[2088191430] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:1; response_revision:366; }","duration":"402.569222ms","start":"2024-09-23T10:22:32.729039Z","end":"2024-09-23T10:22:33.131608Z","steps":["trace[2088191430] 'agreement among raft nodes before linearized reading'  (duration: 402.426676ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:22:33.131648Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T10:22:32.729011Z","time spent":"402.631755ms","remote":"127.0.0.1:49284","response type":"/etcdserverpb.KV/Range","request count":0,"request size":35,"response count":1,"response size":1016,"request content":"key:\"/registry/storageclasses/standard\" "}
	{"level":"info","ts":"2024-09-23T10:22:33.131969Z","caller":"traceutil/trace.go:171","msg":"trace[485971237] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"288.992414ms","start":"2024-09-23T10:22:32.842964Z","end":"2024-09-23T10:22:33.131957Z","steps":["trace[485971237] 'process raft request'  (duration: 182.871523ms)","trace[485971237] 'compare'  (duration: 105.121142ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T10:22:33.132153Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"289.499934ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4096"}
	{"level":"info","ts":"2024-09-23T10:22:33.132187Z","caller":"traceutil/trace.go:171","msg":"trace[517510634] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:366; }","duration":"289.537023ms","start":"2024-09-23T10:22:32.842643Z","end":"2024-09-23T10:22:33.132180Z","steps":["trace[517510634] 'agreement among raft nodes before linearized reading'  (duration: 289.463087ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:22:33.539907Z","caller":"traceutil/trace.go:171","msg":"trace[2144953017] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"108.868731ms","start":"2024-09-23T10:22:33.431009Z","end":"2024-09-23T10:22:33.539878Z","steps":["trace[2144953017] 'process raft request'  (duration: 104.859929ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:22:33.541630Z","caller":"traceutil/trace.go:171","msg":"trace[398091402] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"104.416004ms","start":"2024-09-23T10:22:33.437193Z","end":"2024-09-23T10:22:33.541609Z","steps":["trace[398091402] 'process raft request'  (duration: 104.009984ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:22:33.542060Z","caller":"traceutil/trace.go:171","msg":"trace[668743326] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"104.729221ms","start":"2024-09-23T10:22:33.437317Z","end":"2024-09-23T10:22:33.542046Z","steps":["trace[668743326] 'process raft request'  (duration: 103.952712ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:22:33.542277Z","caller":"traceutil/trace.go:171","msg":"trace[1672766993] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"104.838258ms","start":"2024-09-23T10:22:33.437430Z","end":"2024-09-23T10:22:33.542268Z","steps":["trace[1672766993] 'process raft request'  (duration: 103.868629ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:22:33.542412Z","caller":"traceutil/trace.go:171","msg":"trace[1767469839] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"103.483072ms","start":"2024-09-23T10:22:33.438922Z","end":"2024-09-23T10:22:33.542405Z","steps":["trace[1767469839] 'process raft request'  (duration: 102.407175ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:22:33.736052Z","caller":"traceutil/trace.go:171","msg":"trace[227628294] transaction","detail":"{read_only:false; response_revision:384; number_of_response:1; }","duration":"102.334143ms","start":"2024-09-23T10:22:33.633699Z","end":"2024-09-23T10:22:33.736033Z","steps":["trace[227628294] 'process raft request'  (duration: 13.990139ms)","trace[227628294] 'compare'  (duration: 85.643779ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T10:22:33.736225Z","caller":"traceutil/trace.go:171","msg":"trace[2102522964] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"101.939414ms","start":"2024-09-23T10:22:33.634278Z","end":"2024-09-23T10:22:33.736218Z","steps":["trace[2102522964] 'process raft request'  (duration: 99.195559ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:22:34.032263Z","caller":"traceutil/trace.go:171","msg":"trace[1847492038] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"100.284349ms","start":"2024-09-23T10:22:33.931958Z","end":"2024-09-23T10:22:34.032242Z","steps":["trace[1847492038] 'process raft request'  (duration: 99.986846ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:22:34.130120Z","caller":"traceutil/trace.go:171","msg":"trace[300160576] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"190.991092ms","start":"2024-09-23T10:22:33.939083Z","end":"2024-09-23T10:22:34.130074Z","steps":["trace[300160576] 'process raft request'  (duration: 108.050293ms)","trace[300160576] 'store kv pair into bolt db' {req_type:put; key:/registry/deployments/kube-system/coredns; req_size:4078; } (duration: 77.321365ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T10:22:34.431549Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.369404ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T10:22:34.431682Z","caller":"traceutil/trace.go:171","msg":"trace[1877297112] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:429; }","duration":"100.50784ms","start":"2024-09-23T10:22:34.331159Z","end":"2024-09-23T10:22:34.431667Z","steps":["trace[1877297112] 'agreement among raft nodes before linearized reading'  (duration: 100.356061ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:32:21.645850Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1524}
	{"level":"info","ts":"2024-09-23T10:32:21.668993Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1524,"took":"22.717488ms","hash":1048422649,"current-db-size-bytes":6332416,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":3301376,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-09-23T10:32:21.669036Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1048422649,"revision":1524,"compact-revision":-1}
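etcd emits "apply request took too long" whenever an apply exceeds its 100 ms warning threshold (`expected-duration` in the log). The 100–400 ms spikes above are all clustered around 10:22, during addon startup, which points at transient disk/CPU pressure rather than store corruption. A sketch for ranking the worst offenders from a captured log (the inlined sample lines stand in for the real log file):

```shell
# Rank slow-apply durations from an etcd log capture (sample data inlined;
# in practice the input would be the etcd container's log).
log=$(mktemp)
cat > "$log" <<'EOF'
{"level":"warn","msg":"apply request took too long","took":"105.346561ms","expected-duration":"100ms"}
{"level":"warn","msg":"apply request took too long","took":"402.530772ms","expected-duration":"100ms"}
{"level":"warn","msg":"apply request took too long","took":"289.499934ms","expected-duration":"100ms"}
EOF
grep -o '"took":"[0-9.]*ms"' "$log" \
  | sed 's/.*"took":"\([0-9.]*\)ms".*/\1/' \
  | sort -rn | head -1      # slowest apply in ms (402.530772 for this sample)
rm -f "$log"
```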
	
	
	==> gcp-auth [595e24a79c3ccf249c4aaed9888b59fd920080ef1b7290f246cb0006fc71308a] <==
	2024/09/23 10:25:14 Ready to write response ...
	2024/09/23 10:25:14 Ready to marshal response ...
	2024/09/23 10:25:14 Ready to write response ...
	2024/09/23 10:33:27 Ready to marshal response ...
	2024/09/23 10:33:27 Ready to write response ...
	2024/09/23 10:33:35 Ready to marshal response ...
	2024/09/23 10:33:35 Ready to write response ...
	2024/09/23 10:33:38 Ready to marshal response ...
	2024/09/23 10:33:38 Ready to write response ...
	2024/09/23 10:33:52 Ready to marshal response ...
	2024/09/23 10:33:52 Ready to write response ...
	2024/09/23 10:34:09 Ready to marshal response ...
	2024/09/23 10:34:09 Ready to write response ...
	2024/09/23 10:34:09 Ready to marshal response ...
	2024/09/23 10:34:09 Ready to write response ...
	2024/09/23 10:34:22 Ready to marshal response ...
	2024/09/23 10:34:22 Ready to write response ...
	2024/09/23 10:34:42 Ready to marshal response ...
	2024/09/23 10:34:42 Ready to write response ...
	2024/09/23 10:34:42 Ready to marshal response ...
	2024/09/23 10:34:42 Ready to write response ...
	2024/09/23 10:34:42 Ready to marshal response ...
	2024/09/23 10:34:42 Ready to write response ...
	2024/09/23 10:36:03 Ready to marshal response ...
	2024/09/23 10:36:03 Ready to write response ...
	
	
	==> kernel <==
	 10:36:14 up 18 min,  0 users,  load average: 0.44, 0.34, 0.26
	Linux addons-445250 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147] <==
	I0923 10:34:04.629122       1 main.go:299] handling current node
	I0923 10:34:14.629094       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:34:14.629127       1 main.go:299] handling current node
	I0923 10:34:24.629820       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:34:24.629853       1 main.go:299] handling current node
	I0923 10:34:34.629645       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:34:34.629679       1 main.go:299] handling current node
	I0923 10:34:44.630083       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:34:44.630117       1 main.go:299] handling current node
	I0923 10:34:54.629748       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:34:54.629810       1 main.go:299] handling current node
	I0923 10:35:04.629823       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:35:04.629864       1 main.go:299] handling current node
	I0923 10:35:14.636766       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:35:14.636823       1 main.go:299] handling current node
	I0923 10:35:24.633578       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:35:24.633616       1 main.go:299] handling current node
	I0923 10:35:34.629136       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:35:34.629171       1 main.go:299] handling current node
	I0923 10:35:44.636472       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:35:44.636511       1 main.go:299] handling current node
	I0923 10:35:54.633583       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:35:54.633622       1 main.go:299] handling current node
	I0923 10:36:04.629767       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:36:04.629806       1 main.go:299] handling current node
	
	
	==> kube-apiserver [8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1] <==
	E0923 10:24:46.576989       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0923 10:24:46.587407       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0923 10:33:33.261205       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0923 10:33:34.276041       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0923 10:33:38.712682       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0923 10:33:39.049386       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.47.123"}
	I0923 10:33:49.532462       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0923 10:34:08.670342       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:34:08.670392       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 10:34:08.685068       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:34:08.685107       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 10:34:08.685195       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:34:08.738295       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:34:08.738544       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 10:34:08.826492       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:34:08.826532       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0923 10:34:09.686230       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0923 10:34:09.826884       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0923 10:34:09.841944       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0923 10:34:38.707943       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0923 10:34:42.655531       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.246.242"}
	I0923 10:36:04.066372       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.16.78"}
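The `ResponseCode: 503` error for `v1beta1.metrics.k8s.io` near the top of this section is the aggregation layer failing its discovery check against metrics-server, which is consistent with the `TestAddons/parallel/MetricsServer` failure reported above. On a live cluster the APIService's `Available` condition would be inspected; the kubectl invocation below is illustrative and not run here, and the captured status blob is a hand-written example:

```shell
# Live check (illustrative, not executed here):
#   kubectl --context addons-445250 get apiservice v1beta1.metrics.k8s.io \
#     -o jsonpath='{.status.conditions[?(@.type=="Available")].status}'
# Against a captured status blob, the same field can be pulled with grep:
status='{"conditions":[{"type":"Available","status":"False"}]}'
echo "$status" | grep -o '"status":"[A-Za-z]*"' | cut -d'"' -f4   # False
```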
	
	
	==> kube-controller-manager [3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86] <==
	W0923 10:34:51.554753       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:34:51.554795       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 10:34:54.131420       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="headlamp/headlamp-7b5c95b59d" duration="8.538µs"
	I0923 10:35:00.083662       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-445250"
	I0923 10:35:04.235498       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="headlamp"
	I0923 10:35:10.709596       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	W0923 10:35:14.162689       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:35:14.162738       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:35:27.239309       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:35:27.239346       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:35:32.999622       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:35:32.999663       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:35:37.368488       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:35:37.368529       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:35:45.661608       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:35:45.661654       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 10:36:03.933612       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="71.127094ms"
	I0923 10:36:03.939422       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="5.754974ms"
	I0923 10:36:03.939499       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="38.251µs"
	I0923 10:36:03.941028       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="40.215µs"
	I0923 10:36:06.243318       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I0923 10:36:06.245055       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-bc57996ff" duration="7.253µs"
	I0923 10:36:06.247221       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I0923 10:36:08.740134       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="5.911125ms"
	I0923 10:36:08.740219       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="45.407µs"
	
	
	==> kube-proxy [60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41] <==
	I0923 10:22:34.431903       1 server_linux.go:66] "Using iptables proxy"
	I0923 10:22:35.042477       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0923 10:22:35.042566       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 10:22:35.338576       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 10:22:35.338730       1 server_linux.go:169] "Using iptables Proxier"
	I0923 10:22:35.342534       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 10:22:35.342914       1 server.go:483] "Version info" version="v1.31.1"
	I0923 10:22:35.342944       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 10:22:35.344273       1 config.go:105] "Starting endpoint slice config controller"
	I0923 10:22:35.344364       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 10:22:35.344302       1 config.go:328] "Starting node config controller"
	I0923 10:22:35.344482       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 10:22:35.344292       1 config.go:199] "Starting service config controller"
	I0923 10:22:35.344524       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 10:22:35.445049       1 shared_informer.go:320] Caches are synced for service config
	I0923 10:22:35.445083       1 shared_informer.go:320] Caches are synced for node config
	I0923 10:22:35.445054       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9] <==
	E0923 10:22:23.044121       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0923 10:22:23.044074       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:23.044627       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0923 10:22:23.044704       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 10:22:23.044717       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:23.044254       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 10:22:23.044734       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0923 10:22:23.044753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:23.044774       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 10:22:23.044800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:23.045089       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 10:22:23.045151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:23.983340       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 10:22:23.983386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:23.986617       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 10:22:23.986665       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:24.010130       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 10:22:24.010176       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:24.047286       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 10:22:24.047439       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:24.182956       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 10:22:24.183033       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:24.191245       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 10:22:24.191331       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0923 10:22:24.442656       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 10:36:04 addons-445250 kubelet[1645]: E0923 10:36:04.660868    1645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ErrImagePull: \"unable to retrieve auth token: invalid username/password: unauthorized: authentication failed\"" pod="default/busybox" podUID="cf5ff0cb-1670-40c0-b132-16e835022e57"
	Sep 23 10:36:05 addons-445250 kubelet[1645]: I0923 10:36:05.036995    1645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5rvhw\" (UniqueName: \"kubernetes.io/projected/2eb91201-ae53-4248-b0dc-bc754dc7f77c-kube-api-access-5rvhw\") pod \"2eb91201-ae53-4248-b0dc-bc754dc7f77c\" (UID: \"2eb91201-ae53-4248-b0dc-bc754dc7f77c\") "
	Sep 23 10:36:05 addons-445250 kubelet[1645]: I0923 10:36:05.038794    1645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2eb91201-ae53-4248-b0dc-bc754dc7f77c-kube-api-access-5rvhw" (OuterVolumeSpecName: "kube-api-access-5rvhw") pod "2eb91201-ae53-4248-b0dc-bc754dc7f77c" (UID: "2eb91201-ae53-4248-b0dc-bc754dc7f77c"). InnerVolumeSpecName "kube-api-access-5rvhw". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 10:36:05 addons-445250 kubelet[1645]: I0923 10:36:05.138072    1645 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-5rvhw\" (UniqueName: \"kubernetes.io/projected/2eb91201-ae53-4248-b0dc-bc754dc7f77c-kube-api-access-5rvhw\") on node \"addons-445250\" DevicePath \"\""
	Sep 23 10:36:05 addons-445250 kubelet[1645]: I0923 10:36:05.719155    1645 scope.go:117] "RemoveContainer" containerID="d86adcd030248e084305def7f2ef0d5d93a54175696a838721bec28fd52b9d87"
	Sep 23 10:36:05 addons-445250 kubelet[1645]: I0923 10:36:05.734530    1645 scope.go:117] "RemoveContainer" containerID="d86adcd030248e084305def7f2ef0d5d93a54175696a838721bec28fd52b9d87"
	Sep 23 10:36:05 addons-445250 kubelet[1645]: E0923 10:36:05.735013    1645 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d86adcd030248e084305def7f2ef0d5d93a54175696a838721bec28fd52b9d87\": container with ID starting with d86adcd030248e084305def7f2ef0d5d93a54175696a838721bec28fd52b9d87 not found: ID does not exist" containerID="d86adcd030248e084305def7f2ef0d5d93a54175696a838721bec28fd52b9d87"
	Sep 23 10:36:05 addons-445250 kubelet[1645]: I0923 10:36:05.735068    1645 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d86adcd030248e084305def7f2ef0d5d93a54175696a838721bec28fd52b9d87"} err="failed to get container status \"d86adcd030248e084305def7f2ef0d5d93a54175696a838721bec28fd52b9d87\": rpc error: code = NotFound desc = could not find container \"d86adcd030248e084305def7f2ef0d5d93a54175696a838721bec28fd52b9d87\": container with ID starting with d86adcd030248e084305def7f2ef0d5d93a54175696a838721bec28fd52b9d87 not found: ID does not exist"
	Sep 23 10:36:05 addons-445250 kubelet[1645]: E0923 10:36:05.847918    1645 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087765847662458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:561240,},InodesUsed:&UInt64Value{Value:213,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:36:05 addons-445250 kubelet[1645]: E0923 10:36:05.847954    1645 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087765847662458,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:561240,},InodesUsed:&UInt64Value{Value:213,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:36:07 addons-445250 kubelet[1645]: I0923 10:36:07.543156    1645 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2a6e25d7-9230-4a24-bdce-529e3b38e673" path="/var/lib/kubelet/pods/2a6e25d7-9230-4a24-bdce-529e3b38e673/volumes"
	Sep 23 10:36:07 addons-445250 kubelet[1645]: I0923 10:36:07.543655    1645 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2eb91201-ae53-4248-b0dc-bc754dc7f77c" path="/var/lib/kubelet/pods/2eb91201-ae53-4248-b0dc-bc754dc7f77c/volumes"
	Sep 23 10:36:07 addons-445250 kubelet[1645]: I0923 10:36:07.544073    1645 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e5b88a4a-e11f-403b-b317-ed1e49bd8a32" path="/var/lib/kubelet/pods/e5b88a4a-e11f-403b-b317-ed1e49bd8a32/volumes"
	Sep 23 10:36:08 addons-445250 kubelet[1645]: I0923 10:36:08.734360    1645 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/hello-world-app-55bf9c44b4-cz95t" podStartSLOduration=2.613968815 podStartE2EDuration="5.734338737s" podCreationTimestamp="2024-09-23 10:36:03 +0000 UTC" firstStartedPulling="2024-09-23 10:36:04.568852731 +0000 UTC m=+819.104066152" lastFinishedPulling="2024-09-23 10:36:07.68922265 +0000 UTC m=+822.224436074" observedRunningTime="2024-09-23 10:36:08.734008852 +0000 UTC m=+823.269222290" watchObservedRunningTime="2024-09-23 10:36:08.734338737 +0000 UTC m=+823.269552176"
	Sep 23 10:36:09 addons-445250 kubelet[1645]: I0923 10:36:09.567245    1645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2ns2\" (UniqueName: \"kubernetes.io/projected/0501f316-a471-4550-ae04-f97444d65783-kube-api-access-l2ns2\") pod \"0501f316-a471-4550-ae04-f97444d65783\" (UID: \"0501f316-a471-4550-ae04-f97444d65783\") "
	Sep 23 10:36:09 addons-445250 kubelet[1645]: I0923 10:36:09.567293    1645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0501f316-a471-4550-ae04-f97444d65783-webhook-cert\") pod \"0501f316-a471-4550-ae04-f97444d65783\" (UID: \"0501f316-a471-4550-ae04-f97444d65783\") "
	Sep 23 10:36:09 addons-445250 kubelet[1645]: I0923 10:36:09.569123    1645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0501f316-a471-4550-ae04-f97444d65783-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "0501f316-a471-4550-ae04-f97444d65783" (UID: "0501f316-a471-4550-ae04-f97444d65783"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Sep 23 10:36:09 addons-445250 kubelet[1645]: I0923 10:36:09.569125    1645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0501f316-a471-4550-ae04-f97444d65783-kube-api-access-l2ns2" (OuterVolumeSpecName: "kube-api-access-l2ns2") pod "0501f316-a471-4550-ae04-f97444d65783" (UID: "0501f316-a471-4550-ae04-f97444d65783"). InnerVolumeSpecName "kube-api-access-l2ns2". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 10:36:09 addons-445250 kubelet[1645]: I0923 10:36:09.668447    1645 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-l2ns2\" (UniqueName: \"kubernetes.io/projected/0501f316-a471-4550-ae04-f97444d65783-kube-api-access-l2ns2\") on node \"addons-445250\" DevicePath \"\""
	Sep 23 10:36:09 addons-445250 kubelet[1645]: I0923 10:36:09.668490    1645 reconciler_common.go:288] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0501f316-a471-4550-ae04-f97444d65783-webhook-cert\") on node \"addons-445250\" DevicePath \"\""
	Sep 23 10:36:09 addons-445250 kubelet[1645]: I0923 10:36:09.729362    1645 scope.go:117] "RemoveContainer" containerID="4694d204eb1eaedd3a26bda9a8ce1f260b7f12c4260a2d70e2bce4baa5218b0b"
	Sep 23 10:36:09 addons-445250 kubelet[1645]: I0923 10:36:09.744225    1645 scope.go:117] "RemoveContainer" containerID="4694d204eb1eaedd3a26bda9a8ce1f260b7f12c4260a2d70e2bce4baa5218b0b"
	Sep 23 10:36:09 addons-445250 kubelet[1645]: E0923 10:36:09.744642    1645 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4694d204eb1eaedd3a26bda9a8ce1f260b7f12c4260a2d70e2bce4baa5218b0b\": container with ID starting with 4694d204eb1eaedd3a26bda9a8ce1f260b7f12c4260a2d70e2bce4baa5218b0b not found: ID does not exist" containerID="4694d204eb1eaedd3a26bda9a8ce1f260b7f12c4260a2d70e2bce4baa5218b0b"
	Sep 23 10:36:09 addons-445250 kubelet[1645]: I0923 10:36:09.744684    1645 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4694d204eb1eaedd3a26bda9a8ce1f260b7f12c4260a2d70e2bce4baa5218b0b"} err="failed to get container status \"4694d204eb1eaedd3a26bda9a8ce1f260b7f12c4260a2d70e2bce4baa5218b0b\": rpc error: code = NotFound desc = could not find container \"4694d204eb1eaedd3a26bda9a8ce1f260b7f12c4260a2d70e2bce4baa5218b0b\": container with ID starting with 4694d204eb1eaedd3a26bda9a8ce1f260b7f12c4260a2d70e2bce4baa5218b0b not found: ID does not exist"
	Sep 23 10:36:11 addons-445250 kubelet[1645]: I0923 10:36:11.543406    1645 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0501f316-a471-4550-ae04-f97444d65783" path="/var/lib/kubelet/pods/0501f316-a471-4550-ae04-f97444d65783/volumes"
	
	
	==> storage-provisioner [66c2617c6cdee7295f19941c86a3a9fbb87fd2b16719e15685c22bcccfbae254] <==
	I0923 10:23:15.441142       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 10:23:15.449522       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 10:23:15.449568       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 10:23:15.456173       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 10:23:15.456300       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f4170979-0bd2-4164-95c1-443418c50fe4", APIVersion:"v1", ResourceVersion:"884", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-445250_f06d8c52-62ab-4c97-b119-1dc16882ef82 became leader
	I0923 10:23:15.456350       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-445250_f06d8c52-62ab-4c97-b119-1dc16882ef82!
	I0923 10:23:15.556572       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-445250_f06d8c52-62ab-4c97-b119-1dc16882ef82!
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-445250 -n addons-445250
helpers_test.go:261: (dbg) Run:  kubectl --context addons-445250 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-445250 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-445250 describe pod busybox:

-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-445250/192.168.49.2
	Start Time:       Mon, 23 Sep 2024 10:25:14 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xvh9z (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xvh9z:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  11m                  default-scheduler  Successfully assigned default/busybox to addons-445250
	  Normal   Pulling    9m30s (x4 over 11m)  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     9m30s (x4 over 11m)  kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     9m30s (x4 over 11m)  kubelet            Error: ErrImagePull
	  Warning  Failed     9m15s (x6 over 11m)  kubelet            Error: ImagePullBackOff
	  Normal   BackOff    58s (x42 over 11m)   kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (156.95s)

TestAddons/parallel/MetricsServer (340.75s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer


=== CONT  TestAddons/parallel/MetricsServer
I0923 10:33:17.177992   10562 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
addons_test.go:405: metrics-server stabilized in 2.593002ms
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
I0923 10:33:17.181791   10562 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0923 10:33:17.181810   10562 kapi.go:107] duration metric: took 3.837999ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
helpers_test.go:344: "metrics-server-84c5f94fbc-7csnr" [de3ce7e3-ca3b-4719-baa0-60b0964a15e6] Running
addons_test.go:407: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.002824647s
addons_test.go:413: (dbg) Run:  kubectl --context addons-445250 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-445250 top pods -n kube-system: exit status 1 (71.816468ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-fx58w, age: 10m52.253565564s

** /stderr **
I0923 10:33:22.256238   10562 retry.go:31] will retry after 3.568636195s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-445250 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-445250 top pods -n kube-system: exit status 1 (65.582769ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-fx58w, age: 10m55.889527587s

** /stderr **
I0923 10:33:25.891344   10562 retry.go:31] will retry after 5.651522881s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-445250 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-445250 top pods -n kube-system: exit status 1 (64.227039ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-fx58w, age: 11m1.606264928s

** /stderr **
I0923 10:33:31.607978   10562 retry.go:31] will retry after 6.89625221s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-445250 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-445250 top pods -n kube-system: exit status 1 (66.571662ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-fx58w, age: 11m8.568951321s

** /stderr **
I0923 10:33:38.571017   10562 retry.go:31] will retry after 10.735960704s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-445250 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-445250 top pods -n kube-system: exit status 1 (64.169862ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-fx58w, age: 11m19.369764714s

** /stderr **
I0923 10:33:49.371564   10562 retry.go:31] will retry after 11.096579669s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-445250 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-445250 top pods -n kube-system: exit status 1 (60.963283ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-fx58w, age: 11m30.528294872s

** /stderr **
I0923 10:34:00.529915   10562 retry.go:31] will retry after 32.918064463s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-445250 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-445250 top pods -n kube-system: exit status 1 (70.264295ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-fx58w, age: 12m3.517471091s

** /stderr **
I0923 10:34:33.519543   10562 retry.go:31] will retry after 24.428645134s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-445250 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-445250 top pods -n kube-system: exit status 1 (58.311917ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-fx58w, age: 12m28.006069609s

** /stderr **
I0923 10:34:58.007827   10562 retry.go:31] will retry after 26.055902114s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-445250 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-445250 top pods -n kube-system: exit status 1 (61.257685ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-fx58w, age: 12m54.123772731s

** /stderr **
I0923 10:35:24.125969   10562 retry.go:31] will retry after 56.786357819s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-445250 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-445250 top pods -n kube-system: exit status 1 (59.245082ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-fx58w, age: 13m50.970579408s

** /stderr **
I0923 10:36:20.972491   10562 retry.go:31] will retry after 1m15.214904801s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-445250 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-445250 top pods -n kube-system: exit status 1 (62.559661ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-fx58w, age: 15m6.248929949s

                                                
                                                
** /stderr **
I0923 10:37:36.250920   10562 retry.go:31] will retry after 1m19.024263424s: exit status 1
addons_test.go:413: (dbg) Run:  kubectl --context addons-445250 top pods -n kube-system
addons_test.go:413: (dbg) Non-zero exit: kubectl --context addons-445250 top pods -n kube-system: exit status 1 (61.953164ms)

                                                
                                                
** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-fx58w, age: 16m25.335190371s

                                                
                                                
** /stderr **
addons_test.go:427: failed checking metric server: exit status 1
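The `retry.go:31` lines above show minikube's retry helper re-running the failing `kubectl top pods` check after a steadily growing delay (26s, 56s, 75s, 79s) until the overall budget is exhausted. A minimal sketch of that retry-with-backoff pattern, with hypothetical names rather than minikube's actual implementation:

```python
import random
import time


def retry_with_backoff(fn, max_wait_s=300.0, base_s=13.0, factor=2.0):
    """Re-run fn until it succeeds or the total wait budget is spent.

    The delay grows geometrically with a little jitter, mirroring the
    increasing 'will retry after ...' intervals in the log above.
    """
    delay = base_s
    waited = 0.0
    while True:
        try:
            return fn()
        except Exception:
            # Give up (re-raise) once the next sleep would blow the budget.
            if waited + delay > max_wait_s:
                raise
            # Jitter keeps parallel test retries from synchronizing.
            sleep_for = delay * random.uniform(0.9, 1.1)
            time.sleep(sleep_for)
            waited += sleep_for
            delay *= factor
```

In this run every attempt kept failing with "Metrics not available", so the helper eventually exhausted its budget and the test reported `failed checking metric server: exit status 1`.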
addons_test.go:430: (dbg) Run:  out/minikube-linux-amd64 -p addons-445250 addons disable metrics-server --alsologtostderr -v=1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-445250
helpers_test.go:235: (dbg) docker inspect addons-445250:

                                                
                                                
-- stdout --
	[
	    {
	        "Id": "13e368cd79e9d454e98de5a9cff5f0313d5870383f0fb5ba461690974f64d8de",
	        "Created": "2024-09-23T10:22:07.858444399Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 12702,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-09-23T10:22:07.992183864Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:d94335c0cd164ddebb3c5158e317bcf6d2e08dc08f448d25251f425acb842829",
	        "ResolvConfPath": "/var/lib/docker/containers/13e368cd79e9d454e98de5a9cff5f0313d5870383f0fb5ba461690974f64d8de/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/13e368cd79e9d454e98de5a9cff5f0313d5870383f0fb5ba461690974f64d8de/hostname",
	        "HostsPath": "/var/lib/docker/containers/13e368cd79e9d454e98de5a9cff5f0313d5870383f0fb5ba461690974f64d8de/hosts",
	        "LogPath": "/var/lib/docker/containers/13e368cd79e9d454e98de5a9cff5f0313d5870383f0fb5ba461690974f64d8de/13e368cd79e9d454e98de5a9cff5f0313d5870383f0fb5ba461690974f64d8de-json.log",
	        "Name": "/addons-445250",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-445250:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {
	                    "max-size": "100m"
	                }
	            },
	            "NetworkMode": "addons-445250",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1f8a71aa8244c18f7fa961d18bc719b2df405f1b269f136ff90fad7264f8c0b9-init/diff:/var/lib/docker/overlay2/7d643569ae4970466837c9a65113e736da4066b6ecef95c8dfd4e28343439fd4/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1f8a71aa8244c18f7fa961d18bc719b2df405f1b269f136ff90fad7264f8c0b9/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1f8a71aa8244c18f7fa961d18bc719b2df405f1b269f136ff90fad7264f8c0b9/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1f8a71aa8244c18f7fa961d18bc719b2df405f1b269f136ff90fad7264f8c0b9/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-445250",
	                "Source": "/var/lib/docker/volumes/addons-445250/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-445250",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-445250",
	                "name.minikube.sigs.k8s.io": "addons-445250",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "11702f683be50ee88e7771ed6cf42c56a8b968ee9233079204792fc15e16ca3a",
	            "SandboxKey": "/var/run/docker/netns/11702f683be5",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32768"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32769"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32772"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32770"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "32771"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-445250": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "6e9e6c600c8a794f7091380417d6269c6bcfab6c9ff820d67e47faecc18d66e9",
	                    "EndpointID": "e2a135f221a1a3480c5eff902d6dc55c09d0804810f708c60a366ec74feb8c19",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-445250",
	                        "13e368cd79e9"
	                    ]
	                }
	            }
	        }
	    }
	]

                                                
                                                
-- /stdout --
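When triaging a post-mortem dump like the one above, usually only a couple of fields matter (container state, mapped ports). Since `docker inspect` emits a JSON array, those fields can be pulled out programmatically; a sketch using a trimmed sample shaped like the output above:

```python
import json

# Trimmed sample of `docker inspect <container>` output; the real dump
# above has the same shape (a JSON array with one object per container).
inspect_output = """
[
    {
        "Name": "/addons-445250",
        "State": {"Status": "running", "Running": true, "ExitCode": 0},
        "NetworkSettings": {
            "Ports": {
                "8443/tcp": [{"HostIp": "127.0.0.1", "HostPort": "32771"}]
            }
        }
    }
]
"""

container = json.loads(inspect_output)[0]
status = container["State"]["Status"]
apiserver_port = container["NetworkSettings"]["Ports"]["8443/tcp"][0]["HostPort"]
print(status, apiserver_port)  # running 32771
```

For one-off queries, `docker inspect -f '{{.State.Status}}' addons-445250` extracts the same field directly via a Go template, without any JSON parsing.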
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-445250 -n addons-445250
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-445250 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-445250 logs -n 25: (1.180220837s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-662224                                                                     | download-only-662224   | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| start   | --download-only -p                                                                          | download-docker-581243 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | download-docker-581243                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-581243                                                                   | download-docker-581243 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-083835   | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | binary-mirror-083835                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:40991                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-083835                                                                     | binary-mirror-083835   | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| addons  | enable dashboard -p                                                                         | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | addons-445250                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | addons-445250                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-445250 --wait=true                                                                | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:25 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	| addons  | addons-445250 addons disable                                                                | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC | 23 Sep 24 10:33 UTC |
	|         | addons-445250                                                                               |                        |         |         |                     |                     |
	| ssh     | addons-445250 ssh curl -s                                                                   | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:33 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| addons  | addons-445250 addons                                                                        | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-445250 addons                                                                        | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-445250 ssh cat                                                                       | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	|         | /opt/local-path-provisioner/pvc-f2f3f271-6db1-4176-931b-e93dd714c1c9_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-445250 addons disable                                                                | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:35 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ip      | addons-445250 ip                                                                            | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	| addons  | addons-445250 addons disable                                                                | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	|         | -p addons-445250                                                                            |                        |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	|         | addons-445250                                                                               |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	|         | -p addons-445250                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-445250 addons disable                                                                | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:34 UTC | 23 Sep 24 10:34 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-445250 ip                                                                            | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:36 UTC | 23 Sep 24 10:36 UTC |
	| addons  | addons-445250 addons disable                                                                | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:36 UTC | 23 Sep 24 10:36 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-445250 addons disable                                                                | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:36 UTC | 23 Sep 24 10:36 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-445250 addons                                                                        | addons-445250          | jenkins | v1.34.0 | 23 Sep 24 10:38 UTC | 23 Sep 24 10:38 UTC |
	|         | disable metrics-server                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:21:46
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:21:46.722935   11967 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:21:46.723042   11967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:21:46.723048   11967 out.go:358] Setting ErrFile to fd 2...
	I0923 10:21:46.723052   11967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:21:46.723211   11967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3772/.minikube/bin
	I0923 10:21:46.723833   11967 out.go:352] Setting JSON to false
	I0923 10:21:46.724726   11967 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":251,"bootTime":1727086656,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:21:46.724818   11967 start.go:139] virtualization: kvm guest
	I0923 10:21:46.726917   11967 out.go:177] * [addons-445250] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 10:21:46.728496   11967 notify.go:220] Checking for updates...
	I0923 10:21:46.728529   11967 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:21:46.730127   11967 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:21:46.731529   11967 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3772/kubeconfig
	I0923 10:21:46.733032   11967 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3772/.minikube
	I0923 10:21:46.734520   11967 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 10:21:46.735940   11967 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:21:46.737437   11967 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:21:46.757864   11967 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 10:21:46.757943   11967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:21:46.804617   11967 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-23 10:21:46.795429084 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 10:21:46.804761   11967 docker.go:318] overlay module found
	I0923 10:21:46.807023   11967 out.go:177] * Using the docker driver based on user configuration
	I0923 10:21:46.808457   11967 start.go:297] selected driver: docker
	I0923 10:21:46.808470   11967 start.go:901] validating driver "docker" against <nil>
	I0923 10:21:46.808480   11967 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:21:46.809252   11967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:21:46.853138   11967 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:26 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-23 10:21:46.844831844 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 10:21:46.853280   11967 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:21:46.853569   11967 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:21:46.855475   11967 out.go:177] * Using Docker driver with root privileges
	I0923 10:21:46.856837   11967 cni.go:84] Creating CNI manager for ""
	I0923 10:21:46.856896   11967 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0923 10:21:46.856908   11967 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 10:21:46.856965   11967 start.go:340] cluster config:
	{Name:addons-445250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-445250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:21:46.858565   11967 out.go:177] * Starting "addons-445250" primary control-plane node in "addons-445250" cluster
	I0923 10:21:46.859951   11967 cache.go:121] Beginning downloading kic base image for docker with crio
	I0923 10:21:46.861523   11967 out.go:177] * Pulling base image v0.0.45-1726784731-19672 ...
	I0923 10:21:46.862889   11967 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:21:46.862932   11967 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-3772/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 10:21:46.862943   11967 cache.go:56] Caching tarball of preloaded images
	I0923 10:21:46.862994   11967 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 10:21:46.863034   11967 preload.go:172] Found /home/jenkins/minikube-integration/19689-3772/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0923 10:21:46.863044   11967 cache.go:59] Finished verifying existence of preloaded tar for v1.31.1 on crio
	I0923 10:21:46.863345   11967 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/config.json ...
	I0923 10:21:46.863370   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/config.json: {Name:mk54c5258400406bc02a0be01645830e04ed3533 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:21:46.878981   11967 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 10:21:46.879106   11967 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 10:21:46.879123   11967 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0923 10:21:46.879127   11967 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0923 10:21:46.879134   11967 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0923 10:21:46.879141   11967 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from local cache
	I0923 10:21:59.079658   11967 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed from cached tarball
	I0923 10:21:59.079699   11967 cache.go:194] Successfully downloaded all kic artifacts
	I0923 10:21:59.079749   11967 start.go:360] acquireMachinesLock for addons-445250: {Name:mk58626d6fa4f17f6f629476491054fee819afac Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0923 10:21:59.079854   11967 start.go:364] duration metric: took 81.967µs to acquireMachinesLock for "addons-445250"
	I0923 10:21:59.079884   11967 start.go:93] Provisioning new machine with config: &{Name:addons-445250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-445250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:21:59.079961   11967 start.go:125] createHost starting for "" (driver="docker")
	I0923 10:21:59.082680   11967 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I0923 10:21:59.082908   11967 start.go:159] libmachine.API.Create for "addons-445250" (driver="docker")
	I0923 10:21:59.082939   11967 client.go:168] LocalClient.Create starting
	I0923 10:21:59.083053   11967 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/19689-3772/.minikube/certs/ca.pem
	I0923 10:21:59.283728   11967 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/19689-3772/.minikube/certs/cert.pem
	I0923 10:21:59.338041   11967 cli_runner.go:164] Run: docker network inspect addons-445250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W0923 10:21:59.353789   11967 cli_runner.go:211] docker network inspect addons-445250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I0923 10:21:59.353863   11967 network_create.go:284] running [docker network inspect addons-445250] to gather additional debugging logs...
	I0923 10:21:59.353885   11967 cli_runner.go:164] Run: docker network inspect addons-445250
	W0923 10:21:59.368954   11967 cli_runner.go:211] docker network inspect addons-445250 returned with exit code 1
	I0923 10:21:59.368983   11967 network_create.go:287] error running [docker network inspect addons-445250]: docker network inspect addons-445250: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-445250 not found
	I0923 10:21:59.368994   11967 network_create.go:289] output of [docker network inspect addons-445250]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-445250 not found
	
	** /stderr **
	I0923 10:21:59.369064   11967 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 10:21:59.384645   11967 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001b467a0}
	I0923 10:21:59.384701   11967 network_create.go:124] attempt to create docker network addons-445250 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I0923 10:21:59.384762   11967 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-445250 addons-445250
	I0923 10:21:59.445035   11967 network_create.go:108] docker network addons-445250 192.168.49.0/24 created
	I0923 10:21:59.445065   11967 kic.go:121] calculated static IP "192.168.49.2" for the "addons-445250" container
	I0923 10:21:59.445131   11967 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I0923 10:21:59.460629   11967 cli_runner.go:164] Run: docker volume create addons-445250 --label name.minikube.sigs.k8s.io=addons-445250 --label created_by.minikube.sigs.k8s.io=true
	I0923 10:21:59.476907   11967 oci.go:103] Successfully created a docker volume addons-445250
	I0923 10:21:59.476979   11967 cli_runner.go:164] Run: docker run --rm --name addons-445250-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-445250 --entrypoint /usr/bin/test -v addons-445250:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib
	I0923 10:22:03.434642   11967 cli_runner.go:217] Completed: docker run --rm --name addons-445250-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-445250 --entrypoint /usr/bin/test -v addons-445250:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -d /var/lib: (3.957618145s)
	I0923 10:22:03.434674   11967 oci.go:107] Successfully prepared a docker volume addons-445250
	I0923 10:22:03.434699   11967 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:22:03.434718   11967 kic.go:194] Starting extracting preloaded images to volume ...
	I0923 10:22:03.434769   11967 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19689-3772/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-445250:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir
	I0923 10:22:07.800698   11967 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/19689-3772/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4:/preloaded.tar:ro -v addons-445250:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed -I lz4 -xf /preloaded.tar -C /extractDir: (4.365884505s)
	I0923 10:22:07.800727   11967 kic.go:203] duration metric: took 4.366005266s to extract preloaded images to volume ...
	W0923 10:22:07.800860   11967 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I0923 10:22:07.800985   11967 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I0923 10:22:07.843740   11967 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-445250 --name addons-445250 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-445250 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-445250 --network addons-445250 --ip 192.168.49.2 --volume addons-445250:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed
	I0923 10:22:08.145428   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Running}}
	I0923 10:22:08.163069   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:08.180280   11967 cli_runner.go:164] Run: docker exec addons-445250 stat /var/lib/dpkg/alternatives/iptables
	I0923 10:22:08.223991   11967 oci.go:144] the created container "addons-445250" has a running status.
	I0923 10:22:08.224039   11967 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa...
	I0923 10:22:08.349744   11967 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I0923 10:22:08.370308   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:08.394245   11967 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I0923 10:22:08.394268   11967 kic_runner.go:114] Args: [docker exec --privileged addons-445250 chown docker:docker /home/docker/.ssh/authorized_keys]
	I0923 10:22:08.436001   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:08.455362   11967 machine.go:93] provisionDockerMachine start ...
	I0923 10:22:08.455457   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:08.480578   11967 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:08.480844   11967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 10:22:08.480858   11967 main.go:141] libmachine: About to run SSH command:
	hostname
	I0923 10:22:08.481650   11967 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:44746->127.0.0.1:32768: read: connection reset by peer
	I0923 10:22:11.613107   11967 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-445250
	
	I0923 10:22:11.613148   11967 ubuntu.go:169] provisioning hostname "addons-445250"
	I0923 10:22:11.613220   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:11.632203   11967 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:11.632375   11967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 10:22:11.632389   11967 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-445250 && echo "addons-445250" | sudo tee /etc/hostname
	I0923 10:22:11.772148   11967 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-445250
	
	I0923 10:22:11.772239   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:11.793347   11967 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:11.793545   11967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 10:22:11.793571   11967 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-445250' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-445250/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-445250' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0923 10:22:11.921432   11967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
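[Editor's note] The provisioning step above pipes a small script over SSH to make the node's hostname resolve locally. A standalone sketch of the same `/etc/hosts` logic, run against a scratch file rather than the real `/etc/hosts` (the seed contents are stand-ins; the node name comes from the log):

```shell
#!/bin/sh
# Sketch of minikube's /etc/hosts fixup, operating on a scratch copy.
set -eu
NAME="addons-445250"
HOSTS="$(mktemp)"
printf '127.0.0.1 localhost\n127.0.1.1 old-name\n' > "$HOSTS"

# If no entry already resolves the hostname, rewrite (or append) the
# 127.0.1.1 line, mirroring the grep/sed/tee logic sent over SSH.
if ! grep -q "[[:space:]]$NAME\$" "$HOSTS"; then
  if grep -q '^127\.0\.1\.1[[:space:]]' "$HOSTS"; then
    sed -i "s/^127\.0\.1\.1[[:space:]].*/127.0.1.1 $NAME/" "$HOSTS"
  else
    echo "127.0.1.1 $NAME" >> "$HOSTS"
  fi
fi
RESULT="$(grep '^127\.0\.1\.1' "$HOSTS")"
echo "$RESULT"   # prints: 127.0.1.1 addons-445250
```

The empty SSH output that follows (`SSH cmd err, output: <nil>:`) is the expected success case: the sed branch edits in place and produces nothing on stdout.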
	I0923 10:22:11.921466   11967 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/19689-3772/.minikube CaCertPath:/home/jenkins/minikube-integration/19689-3772/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/19689-3772/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/19689-3772/.minikube}
	I0923 10:22:11.921533   11967 ubuntu.go:177] setting up certificates
	I0923 10:22:11.921552   11967 provision.go:84] configureAuth start
	I0923 10:22:11.921640   11967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-445250
	I0923 10:22:11.937581   11967 provision.go:143] copyHostCerts
	I0923 10:22:11.937653   11967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3772/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/19689-3772/.minikube/key.pem (1679 bytes)
	I0923 10:22:11.937757   11967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3772/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/19689-3772/.minikube/ca.pem (1082 bytes)
	I0923 10:22:11.937816   11967 exec_runner.go:151] cp: /home/jenkins/minikube-integration/19689-3772/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/19689-3772/.minikube/cert.pem (1123 bytes)
	I0923 10:22:11.937865   11967 provision.go:117] generating server cert: /home/jenkins/minikube-integration/19689-3772/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/19689-3772/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/19689-3772/.minikube/certs/ca-key.pem org=jenkins.addons-445250 san=[127.0.0.1 192.168.49.2 addons-445250 localhost minikube]
	I0923 10:22:12.190566   11967 provision.go:177] copyRemoteCerts
	I0923 10:22:12.190629   11967 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0923 10:22:12.190662   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:12.207913   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:12.301604   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0923 10:22:12.323506   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0923 10:22:12.345626   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0923 10:22:12.366986   11967 provision.go:87] duration metric: took 445.417004ms to configureAuth
	I0923 10:22:12.367016   11967 ubuntu.go:193] setting minikube options for container-runtime
	I0923 10:22:12.367177   11967 config.go:182] Loaded profile config "addons-445250": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:22:12.367273   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:12.384149   11967 main.go:141] libmachine: Using SSH client type: native
	I0923 10:22:12.384351   11967 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x864a40] 0x867720 <nil>  [] 0s} 127.0.0.1 32768 <nil> <nil>}
	I0923 10:22:12.384365   11967 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0923 10:22:12.601161   11967 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
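[Editor's note] The step above drops a sysconfig fragment so CRI-O trusts the in-cluster service CIDR as an insecure registry, then restarts crio. A sketch of the same write, targeting a scratch directory instead of `/etc/sysconfig` (SYSCONF is a stand-in; no service restart here):

```shell
#!/bin/sh
# Sketch of the CRIO_MINIKUBE_OPTIONS sysconfig write, against a scratch dir.
set -eu
SYSCONF="$(mktemp -d)"
printf %s "
CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
" > "$SYSCONF/crio.minikube"
# The real flow then runs: sudo systemctl restart crio
cat "$SYSCONF/crio.minikube"
```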
	I0923 10:22:12.601191   11967 machine.go:96] duration metric: took 4.145798692s to provisionDockerMachine
	I0923 10:22:12.601205   11967 client.go:171] duration metric: took 13.518254951s to LocalClient.Create
	I0923 10:22:12.601232   11967 start.go:167] duration metric: took 13.518321061s to libmachine.API.Create "addons-445250"
	I0923 10:22:12.601243   11967 start.go:293] postStartSetup for "addons-445250" (driver="docker")
	I0923 10:22:12.601256   11967 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0923 10:22:12.601330   11967 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0923 10:22:12.601386   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:12.617703   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:12.710189   11967 ssh_runner.go:195] Run: cat /etc/os-release
	I0923 10:22:12.713341   11967 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I0923 10:22:12.713372   11967 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I0923 10:22:12.713380   11967 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I0923 10:22:12.713387   11967 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I0923 10:22:12.713396   11967 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3772/.minikube/addons for local assets ...
	I0923 10:22:12.713453   11967 filesync.go:126] Scanning /home/jenkins/minikube-integration/19689-3772/.minikube/files for local assets ...
	I0923 10:22:12.713475   11967 start.go:296] duration metric: took 112.225945ms for postStartSetup
	I0923 10:22:12.713792   11967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-445250
	I0923 10:22:12.730492   11967 profile.go:143] Saving config to /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/config.json ...
	I0923 10:22:12.730768   11967 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:22:12.730831   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:12.747370   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:12.837980   11967 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
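[Editor's note] The two `df | awk` probes above read disk usage for the volume backing `/var` inside the node: column 5 of `df -h` is the used percentage, column 4 of `df -BG` the free space in whole gigabytes. The same probes can be run on any Linux host:

```shell
#!/bin/sh
# Sketch of minikube's disk-space probes (run on the local host, not the node).
set -eu
USED_PCT="$(df -h /var | awk 'NR==2{print $5}')"   # e.g. "42%"
FREE_G="$(df -BG /var | awk 'NR==2{print $4}')"    # e.g. "13G"
echo "used=$USED_PCT free=$FREE_G"
```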
	I0923 10:22:12.841809   11967 start.go:128] duration metric: took 13.761835585s to createHost
	I0923 10:22:12.841831   11967 start.go:83] releasing machines lock for "addons-445250", held for 13.76196327s
	I0923 10:22:12.841880   11967 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-445250
	I0923 10:22:12.857765   11967 ssh_runner.go:195] Run: cat /version.json
	I0923 10:22:12.857812   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:12.857826   11967 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0923 10:22:12.857890   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:12.875001   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:12.875855   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:13.035237   11967 ssh_runner.go:195] Run: systemctl --version
	I0923 10:22:13.039237   11967 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0923 10:22:13.175392   11967 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0923 10:22:13.179320   11967 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 10:22:13.195856   11967 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I0923 10:22:13.195931   11967 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0923 10:22:13.221316   11967 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I0923 10:22:13.221364   11967 start.go:495] detecting cgroup driver to use...
	I0923 10:22:13.221399   11967 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I0923 10:22:13.221447   11967 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0923 10:22:13.235209   11967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0923 10:22:13.245258   11967 docker.go:217] disabling cri-docker service (if available) ...
	I0923 10:22:13.245304   11967 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0923 10:22:13.257110   11967 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0923 10:22:13.270190   11967 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0923 10:22:13.345987   11967 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0923 10:22:13.431095   11967 docker.go:233] disabling docker service ...
	I0923 10:22:13.431158   11967 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0923 10:22:13.448504   11967 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0923 10:22:13.459326   11967 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0923 10:22:13.538609   11967 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0923 10:22:13.627128   11967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0923 10:22:13.637297   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0923 10:22:13.651328   11967 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I0923 10:22:13.651409   11967 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:13.660149   11967 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0923 10:22:13.660207   11967 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:13.668833   11967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:13.677566   11967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:13.686751   11967 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0923 10:22:13.695283   11967 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:13.704095   11967 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:13.718346   11967 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I0923 10:22:13.727226   11967 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0923 10:22:13.734826   11967 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0923 10:22:13.734883   11967 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0923 10:22:13.747287   11967 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0923 10:22:13.755093   11967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:22:13.829252   11967 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0923 10:22:14.158226   11967 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I0923 10:22:14.158294   11967 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0923 10:22:14.161542   11967 start.go:563] Will wait 60s for crictl version
	I0923 10:22:14.161588   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:22:14.164545   11967 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0923 10:22:14.194967   11967 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I0923 10:22:14.195073   11967 ssh_runner.go:195] Run: crio --version
	I0923 10:22:14.228259   11967 ssh_runner.go:195] Run: crio --version
	I0923 10:22:14.262832   11967 out.go:177] * Preparing Kubernetes v1.31.1 on CRI-O 1.24.6 ...
	I0923 10:22:14.264297   11967 cli_runner.go:164] Run: docker network inspect addons-445250 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I0923 10:22:14.279971   11967 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I0923 10:22:14.283271   11967 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:22:14.293146   11967 kubeadm.go:883] updating cluster {Name:addons-445250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-445250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0923 10:22:14.293287   11967 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:22:14.293343   11967 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 10:22:14.352262   11967 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 10:22:14.352284   11967 crio.go:433] Images already preloaded, skipping extraction
	I0923 10:22:14.352323   11967 ssh_runner.go:195] Run: sudo crictl images --output json
	I0923 10:22:14.382541   11967 crio.go:514] all images are preloaded for cri-o runtime.
	I0923 10:22:14.382561   11967 cache_images.go:84] Images are preloaded, skipping loading
	I0923 10:22:14.382568   11967 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.1 crio true true} ...
	I0923 10:22:14.382655   11967 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.1/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-445250 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.1 ClusterName:addons-445250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0923 10:22:14.382713   11967 ssh_runner.go:195] Run: crio config
	I0923 10:22:14.424280   11967 cni.go:84] Creating CNI manager for ""
	I0923 10:22:14.424300   11967 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0923 10:22:14.424309   11967 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0923 10:22:14.424330   11967 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.1 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-445250 NodeName:addons-445250 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0923 10:22:14.424465   11967 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-445250"
	  kubeletExtraArgs:
	    node-ip: 192.168.49.2
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.1
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0923 10:22:14.424518   11967 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.1
	I0923 10:22:14.432810   11967 binaries.go:44] Found k8s binaries, skipping transfer
	I0923 10:22:14.432882   11967 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0923 10:22:14.440979   11967 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I0923 10:22:14.456846   11967 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0923 10:22:14.473092   11967 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0923 10:22:14.489063   11967 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I0923 10:22:14.492280   11967 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0923 10:22:14.502541   11967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:22:14.581826   11967 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:22:14.594096   11967 certs.go:68] Setting up /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250 for IP: 192.168.49.2
	I0923 10:22:14.594120   11967 certs.go:194] generating shared ca certs ...
	I0923 10:22:14.594140   11967 certs.go:226] acquiring lock for ca certs: {Name:mkbb719d992584afad4bc806b595dfbc8bf85283 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:14.594259   11967 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/19689-3772/.minikube/ca.key
	I0923 10:22:14.681658   11967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3772/.minikube/ca.crt ...
	I0923 10:22:14.681683   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/ca.crt: {Name:mk1f9f53ba20e5a2662fcdac9037bc6a4a8fd1b0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:14.681837   11967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3772/.minikube/ca.key ...
	I0923 10:22:14.681847   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/ca.key: {Name:mk52ffe2b2a53346768d26bc1f6d2740c4fc9ca2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:14.681914   11967 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/19689-3772/.minikube/proxy-client-ca.key
	I0923 10:22:14.764606   11967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3772/.minikube/proxy-client-ca.crt ...
	I0923 10:22:14.764633   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/proxy-client-ca.crt: {Name:mk8f4a9df3471bb1b7cc77d68850cb5575be1691 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:14.764782   11967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3772/.minikube/proxy-client-ca.key ...
	I0923 10:22:14.764793   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/proxy-client-ca.key: {Name:mk637c0032a7e0b43519628027243d2c0d2d6b64 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:14.764855   11967 certs.go:256] generating profile certs ...
	I0923 10:22:14.764906   11967 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.key
	I0923 10:22:14.764920   11967 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt with IP's: []
	I0923 10:22:15.005422   11967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt ...
	I0923 10:22:15.005450   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: {Name:mk4bd69aa7022da3f588d449215ad314ecdb2eb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:15.005608   11967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.key ...
	I0923 10:22:15.005620   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.key: {Name:mkae46f7c7acf2efdeeb48926276ca9bf1fec02a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:15.005682   11967 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.key.3041dcfa
	I0923 10:22:15.005699   11967 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.crt.3041dcfa with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I0923 10:22:15.404464   11967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.crt.3041dcfa ...
	I0923 10:22:15.404496   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.crt.3041dcfa: {Name:mk8def3abfe8729e739e9892b8e2dfdfaa975e77 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:15.404648   11967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.key.3041dcfa ...
	I0923 10:22:15.404661   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.key.3041dcfa: {Name:mkc1cfd8e1a6b6ba70edb50de4cc7a2de96fef4f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:15.404730   11967 certs.go:381] copying /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.crt.3041dcfa -> /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.crt
	I0923 10:22:15.404821   11967 certs.go:385] copying /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.key.3041dcfa -> /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.key
	I0923 10:22:15.404875   11967 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/proxy-client.key
	I0923 10:22:15.404901   11967 crypto.go:68] Generating cert /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/proxy-client.crt with IP's: []
	I0923 10:22:15.857985   11967 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/proxy-client.crt ...
	I0923 10:22:15.858015   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/proxy-client.crt: {Name:mk3ebea646b11f719e3aafe05a2859ab48c62804 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:15.858201   11967 crypto.go:164] Writing key to /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/proxy-client.key ...
	I0923 10:22:15.858218   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/proxy-client.key: {Name:mk16575e201f9fd127e621495ba0c5bc4e64a79f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:15.858432   11967 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3772/.minikube/certs/ca-key.pem (1679 bytes)
	I0923 10:22:15.858477   11967 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3772/.minikube/certs/ca.pem (1082 bytes)
	I0923 10:22:15.858514   11967 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3772/.minikube/certs/cert.pem (1123 bytes)
	I0923 10:22:15.858544   11967 certs.go:484] found cert: /home/jenkins/minikube-integration/19689-3772/.minikube/certs/key.pem (1679 bytes)
	I0923 10:22:15.859128   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0923 10:22:15.880908   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0923 10:22:15.902039   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0923 10:22:15.923187   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0923 10:22:15.944338   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0923 10:22:15.965387   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0923 10:22:15.986458   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0923 10:22:16.007433   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0923 10:22:16.028442   11967 ssh_runner.go:362] scp /home/jenkins/minikube-integration/19689-3772/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0923 10:22:16.050349   11967 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0923 10:22:16.067334   11967 ssh_runner.go:195] Run: openssl version
	I0923 10:22:16.072904   11967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0923 10:22:16.081554   11967 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:22:16.084699   11967 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Sep 23 10:22 /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:22:16.084740   11967 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0923 10:22:16.091121   11967 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0923 10:22:16.099776   11967 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0923 10:22:16.102849   11967 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0923 10:22:16.102894   11967 kubeadm.go:392] StartCluster: {Name:addons-445250 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:addons-445250 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:22:16.102966   11967 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0923 10:22:16.103005   11967 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0923 10:22:16.135153   11967 cri.go:89] found id: ""
	I0923 10:22:16.135208   11967 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0923 10:22:16.143317   11967 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0923 10:22:16.151131   11967 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I0923 10:22:16.151185   11967 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0923 10:22:16.158804   11967 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0923 10:22:16.158823   11967 kubeadm.go:157] found existing configuration files:
	
	I0923 10:22:16.158860   11967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0923 10:22:16.166353   11967 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0923 10:22:16.166422   11967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0923 10:22:16.174207   11967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0923 10:22:16.181623   11967 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0923 10:22:16.181684   11967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0923 10:22:16.189018   11967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0923 10:22:16.196505   11967 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0923 10:22:16.196565   11967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0923 10:22:16.204028   11967 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0923 10:22:16.211652   11967 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0923 10:22:16.211714   11967 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0923 10:22:16.218868   11967 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.1:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I0923 10:22:16.253475   11967 kubeadm.go:310] [init] Using Kubernetes version: v1.31.1
	I0923 10:22:16.254017   11967 kubeadm.go:310] [preflight] Running pre-flight checks
	I0923 10:22:16.269754   11967 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I0923 10:22:16.269837   11967 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1069-gcp
	I0923 10:22:16.269871   11967 kubeadm.go:310] OS: Linux
	I0923 10:22:16.269959   11967 kubeadm.go:310] CGROUPS_CPU: enabled
	I0923 10:22:16.270050   11967 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I0923 10:22:16.270128   11967 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I0923 10:22:16.270202   11967 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I0923 10:22:16.270274   11967 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I0923 10:22:16.270360   11967 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I0923 10:22:16.270417   11967 kubeadm.go:310] CGROUPS_PIDS: enabled
	I0923 10:22:16.270469   11967 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I0923 10:22:16.270521   11967 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I0923 10:22:16.318273   11967 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0923 10:22:16.318402   11967 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0923 10:22:16.318562   11967 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0923 10:22:16.324445   11967 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0923 10:22:16.327392   11967 out.go:235]   - Generating certificates and keys ...
	I0923 10:22:16.327503   11967 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0923 10:22:16.327598   11967 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0923 10:22:16.461803   11967 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0923 10:22:16.741266   11967 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0923 10:22:16.849130   11967 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0923 10:22:17.176671   11967 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0923 10:22:17.429269   11967 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0923 10:22:17.429471   11967 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-445250 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 10:22:17.596676   11967 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0923 10:22:17.596789   11967 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-445250 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I0923 10:22:17.788256   11967 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0923 10:22:17.876354   11967 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0923 10:22:18.471196   11967 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0923 10:22:18.471297   11967 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0923 10:22:18.730115   11967 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0923 10:22:18.932151   11967 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0923 10:22:19.024826   11967 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0923 10:22:19.144008   11967 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0923 10:22:19.259815   11967 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0923 10:22:19.260334   11967 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0923 10:22:19.262678   11967 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0923 10:22:19.264869   11967 out.go:235]   - Booting up control plane ...
	I0923 10:22:19.265001   11967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0923 10:22:19.265096   11967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0923 10:22:19.265162   11967 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0923 10:22:19.273358   11967 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0923 10:22:19.278617   11967 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0923 10:22:19.278696   11967 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0923 10:22:19.355589   11967 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0923 10:22:19.355691   11967 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0923 10:22:19.857077   11967 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.535465ms
	I0923 10:22:19.857205   11967 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0923 10:22:24.859100   11967 kubeadm.go:310] [api-check] The API server is healthy after 5.002044714s
	I0923 10:22:24.871015   11967 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0923 10:22:24.881606   11967 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0923 10:22:24.899928   11967 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0923 10:22:24.900246   11967 kubeadm.go:310] [mark-control-plane] Marking the node addons-445250 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0923 10:22:24.910024   11967 kubeadm.go:310] [bootstrap-token] Using token: tzcr7c.qy08ihjpsu8woy77
	I0923 10:22:24.911692   11967 out.go:235]   - Configuring RBAC rules ...
	I0923 10:22:24.911836   11967 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0923 10:22:24.914938   11967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0923 10:22:24.920963   11967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0923 10:22:24.923728   11967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0923 10:22:24.926249   11967 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0923 10:22:24.929913   11967 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0923 10:22:25.266487   11967 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0923 10:22:25.686706   11967 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0923 10:22:26.267074   11967 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0923 10:22:26.268161   11967 kubeadm.go:310] 
	I0923 10:22:26.268232   11967 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0923 10:22:26.268246   11967 kubeadm.go:310] 
	I0923 10:22:26.268333   11967 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0923 10:22:26.268349   11967 kubeadm.go:310] 
	I0923 10:22:26.268371   11967 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0923 10:22:26.268443   11967 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0923 10:22:26.268498   11967 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0923 10:22:26.268503   11967 kubeadm.go:310] 
	I0923 10:22:26.268548   11967 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0923 10:22:26.268555   11967 kubeadm.go:310] 
	I0923 10:22:26.268595   11967 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0923 10:22:26.268602   11967 kubeadm.go:310] 
	I0923 10:22:26.268680   11967 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0923 10:22:26.268775   11967 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0923 10:22:26.268850   11967 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0923 10:22:26.268858   11967 kubeadm.go:310] 
	I0923 10:22:26.268962   11967 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0923 10:22:26.269039   11967 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0923 10:22:26.269050   11967 kubeadm.go:310] 
	I0923 10:22:26.269125   11967 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token tzcr7c.qy08ihjpsu8woy77 \
	I0923 10:22:26.269229   11967 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:122e8e80e5d252d0370d2ad3bf07440a5ae64df4281d54e7d14ffb6b148b696e \
	I0923 10:22:26.269251   11967 kubeadm.go:310] 	--control-plane 
	I0923 10:22:26.269256   11967 kubeadm.go:310] 
	I0923 10:22:26.269371   11967 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0923 10:22:26.269381   11967 kubeadm.go:310] 
	I0923 10:22:26.269476   11967 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token tzcr7c.qy08ihjpsu8woy77 \
	I0923 10:22:26.269658   11967 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:122e8e80e5d252d0370d2ad3bf07440a5ae64df4281d54e7d14ffb6b148b696e 
	I0923 10:22:26.271764   11967 kubeadm.go:310] W0923 10:22:16.250858    1303 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "ClusterConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:22:26.272087   11967 kubeadm.go:310] W0923 10:22:16.251517    1303 common.go:101] your configuration file uses a deprecated API spec: "kubeadm.k8s.io/v1beta3" (kind: "InitConfiguration"). Please use 'kubeadm config migrate --old-config old.yaml --new-config new.yaml', which will write the new, similar spec using a newer API version.
	I0923 10:22:26.272386   11967 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1069-gcp\n", err: exit status 1
	I0923 10:22:26.272539   11967 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0923 10:22:26.272573   11967 cni.go:84] Creating CNI manager for ""
	I0923 10:22:26.272586   11967 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0923 10:22:26.274792   11967 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0923 10:22:26.276289   11967 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0923 10:22:26.279952   11967 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.1/kubectl ...
	I0923 10:22:26.279967   11967 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0923 10:22:26.296902   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0923 10:22:26.488183   11967 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0923 10:22:26.488297   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:26.488297   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-445250 minikube.k8s.io/updated_at=2024_09_23T10_22_26_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986 minikube.k8s.io/name=addons-445250 minikube.k8s.io/primary=true
	I0923 10:22:26.495506   11967 ops.go:34] apiserver oom_adj: -16
	I0923 10:22:26.569029   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:27.069137   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:27.570004   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:28.069287   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:28.569763   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:29.069617   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:29.569788   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:30.069539   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:30.569838   11967 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.1/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0923 10:22:30.639983   11967 kubeadm.go:1113] duration metric: took 4.1517431s to wait for elevateKubeSystemPrivileges
	I0923 10:22:30.640014   11967 kubeadm.go:394] duration metric: took 14.537124377s to StartCluster
	I0923 10:22:30.640032   11967 settings.go:142] acquiring lock: {Name:mk872f1d275188f797c9a12c8098849cd4e5cab3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:30.640127   11967 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/19689-3772/kubeconfig
	I0923 10:22:30.640473   11967 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/19689-3772/kubeconfig: {Name:mk157cbe356b4d3a0ed9cd6c04752524343ac891 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0923 10:22:30.640639   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0923 10:22:30.640656   11967 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0923 10:22:30.640716   11967 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I0923 10:22:30.640836   11967 addons.go:69] Setting yakd=true in profile "addons-445250"
	I0923 10:22:30.640848   11967 config.go:182] Loaded profile config "addons-445250": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:22:30.640862   11967 addons.go:234] Setting addon yakd=true in "addons-445250"
	I0923 10:22:30.640856   11967 addons.go:69] Setting ingress-dns=true in profile "addons-445250"
	I0923 10:22:30.640883   11967 addons.go:234] Setting addon ingress-dns=true in "addons-445250"
	I0923 10:22:30.640892   11967 addons.go:69] Setting gcp-auth=true in profile "addons-445250"
	I0923 10:22:30.640895   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.640889   11967 addons.go:69] Setting default-storageclass=true in profile "addons-445250"
	I0923 10:22:30.640909   11967 mustload.go:65] Loading cluster: addons-445250
	I0923 10:22:30.640900   11967 addons.go:69] Setting cloud-spanner=true in profile "addons-445250"
	I0923 10:22:30.640917   11967 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-445250"
	I0923 10:22:30.640906   11967 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-445250"
	I0923 10:22:30.640934   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.640949   11967 addons.go:69] Setting registry=true in profile "addons-445250"
	I0923 10:22:30.640966   11967 addons.go:234] Setting addon registry=true in "addons-445250"
	I0923 10:22:30.640971   11967 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-445250"
	I0923 10:22:30.640995   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.640999   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.641053   11967 config.go:182] Loaded profile config "addons-445250": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:22:30.641257   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.641280   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.641366   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.641379   11967 addons.go:69] Setting inspektor-gadget=true in profile "addons-445250"
	I0923 10:22:30.641392   11967 addons.go:234] Setting addon inspektor-gadget=true in "addons-445250"
	I0923 10:22:30.641415   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.641426   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.641435   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.641870   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.642158   11967 addons.go:69] Setting ingress=true in profile "addons-445250"
	I0923 10:22:30.642181   11967 addons.go:234] Setting addon ingress=true in "addons-445250"
	I0923 10:22:30.642196   11967 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-445250"
	I0923 10:22:30.642213   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.642215   11967 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-445250"
	I0923 10:22:30.642394   11967 addons.go:69] Setting volcano=true in profile "addons-445250"
	I0923 10:22:30.642518   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.642544   11967 addons.go:234] Setting addon volcano=true in "addons-445250"
	I0923 10:22:30.642576   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.642679   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.642705   11967 addons.go:69] Setting volumesnapshots=true in profile "addons-445250"
	I0923 10:22:30.642720   11967 addons.go:234] Setting addon volumesnapshots=true in "addons-445250"
	I0923 10:22:30.642741   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.642878   11967 addons.go:69] Setting metrics-server=true in profile "addons-445250"
	I0923 10:22:30.642900   11967 addons.go:234] Setting addon metrics-server=true in "addons-445250"
	I0923 10:22:30.642925   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.641369   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.640934   11967 addons.go:234] Setting addon cloud-spanner=true in "addons-445250"
	I0923 10:22:30.642998   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.643036   11967 addons.go:69] Setting storage-provisioner=true in profile "addons-445250"
	I0923 10:22:30.643061   11967 addons.go:234] Setting addon storage-provisioner=true in "addons-445250"
	I0923 10:22:30.643085   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.643519   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.648041   11967 out.go:177] * Verifying Kubernetes components...
	I0923 10:22:30.648388   11967 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-445250"
	I0923 10:22:30.648409   11967 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-445250"
	I0923 10:22:30.648446   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.648949   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.650296   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.650451   11967 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0923 10:22:30.665893   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.666046   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.666054   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.667975   11967 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I0923 10:22:30.669562   11967 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0923 10:22:30.669584   11967 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0923 10:22:30.669644   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.685327   11967 addons.go:234] Setting addon default-storageclass=true in "addons-445250"
	I0923 10:22:30.685375   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.685862   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.686064   11967 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I0923 10:22:30.687257   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.687775   11967 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 10:22:30.687828   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0923 10:22:30.687898   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.698483   11967 out.go:177]   - Using image docker.io/registry:2.8.3
	I0923 10:22:30.700153   11967 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.6
	I0923 10:22:30.702006   11967 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I0923 10:22:30.702025   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I0923 10:22:30.702098   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.711515   11967 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.32.0
	I0923 10:22:30.713056   11967 addons.go:431] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0923 10:22:30.713078   11967 ssh_runner.go:362] scp inspektor-gadget/ig-namespace.yaml --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0923 10:22:30.713145   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.714581   11967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0923 10:22:30.717305   11967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0923 10:22:30.718944   11967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0923 10:22:30.720619   11967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0923 10:22:30.722483   11967 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0923 10:22:30.722584   11967 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0923 10:22:30.724093   11967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0923 10:22:30.724118   11967 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0923 10:22:30.724177   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.724581   11967 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.16.2
	I0923 10:22:30.726222   11967 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0923 10:22:30.726536   11967 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:22:30.726554   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0923 10:22:30.726622   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.730002   11967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0923 10:22:30.732162   11967 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0923 10:22:30.733954   11967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0923 10:22:30.733974   11967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0923 10:22:30.734033   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.741467   11967 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-445250"
	I0923 10:22:30.741546   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:30.742079   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:30.744624   11967 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:22:30.747096   11967 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.2
	I0923 10:22:30.749664   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.757242   11967 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:22:30.758538   11967 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I0923 10:22:30.759791   11967 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 10:22:30.759812   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0923 10:22:30.759870   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.760711   11967 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0923 10:22:30.760738   11967 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0923 10:22:30.760797   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.757266   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.770117   11967 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.24
	I0923 10:22:30.770190   11967 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	W0923 10:22:30.772541   11967 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I0923 10:22:30.774529   11967 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I0923 10:22:30.774556   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0923 10:22:30.774621   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.775602   11967 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:22:30.775624   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0923 10:22:30.775674   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.787036   11967 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0923 10:22:30.787436   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.789020   11967 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0923 10:22:30.789037   11967 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0923 10:22:30.789092   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.790579   11967 out.go:177]   - Using image docker.io/busybox:stable
	I0923 10:22:30.792232   11967 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:22:30.792253   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0923 10:22:30.792310   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:30.799243   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.802426   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.804649   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.807350   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.811528   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.815735   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.815938   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.817779   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.818866   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:30.821325   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	W0923 10:22:30.833869   11967 sshutil.go:64] dial failure (will retry): ssh: handshake failed: EOF
	I0923 10:22:30.833911   11967 retry.go:31] will retry after 251.502566ms: ssh: handshake failed: EOF
	I0923 10:22:30.930840   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0923 10:22:31.038430   11967 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0923 10:22:31.130020   11967 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I0923 10:22:31.130106   11967 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0923 10:22:31.148662   11967 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0923 10:22:31.148713   11967 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0923 10:22:31.247685   11967 addons.go:431] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0923 10:22:31.247721   11967 ssh_runner.go:362] scp inspektor-gadget/ig-serviceaccount.yaml --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0923 10:22:31.329027   11967 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0923 10:22:31.329056   11967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0923 10:22:31.329202   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0923 10:22:31.329335   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0923 10:22:31.329470   11967 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:22:31.329484   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0923 10:22:31.339924   11967 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0923 10:22:31.339949   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0923 10:22:31.342429   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0923 10:22:31.345422   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0923 10:22:31.346167   11967 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0923 10:22:31.346186   11967 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0923 10:22:31.437032   11967 addons.go:431] installing /etc/kubernetes/addons/ig-role.yaml
	I0923 10:22:31.437063   11967 ssh_runner.go:362] scp inspektor-gadget/ig-role.yaml --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0923 10:22:31.440413   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0923 10:22:31.445193   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0923 10:22:31.527011   11967 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0923 10:22:31.527097   11967 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0923 10:22:31.546462   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0923 10:22:31.627051   11967 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0923 10:22:31.627133   11967 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0923 10:22:31.627825   11967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0923 10:22:31.627867   11967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0923 10:22:31.630751   11967 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0923 10:22:31.630811   11967 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0923 10:22:31.635963   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0923 10:22:31.636203   11967 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0923 10:22:31.636238   11967 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0923 10:22:31.639336   11967 addons.go:431] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0923 10:22:31.639357   11967 ssh_runner.go:362] scp inspektor-gadget/ig-rolebinding.yaml --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0923 10:22:31.826246   11967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0923 10:22:31.826334   11967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0923 10:22:31.826554   11967 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:22:31.826604   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0923 10:22:31.827508   11967 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:22:31.827551   11967 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0923 10:22:31.930152   11967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0923 10:22:31.930233   11967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0923 10:22:31.946738   11967 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0923 10:22:31.946849   11967 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrole.yaml --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0923 10:22:32.031166   11967 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0923 10:22:32.031251   11967 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0923 10:22:32.038777   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0923 10:22:32.046798   11967 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.1/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.115916979s)
	I0923 10:22:32.046977   11967 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I0923 10:22:32.046913   11967 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.008451247s)
	I0923 10:22:32.048977   11967 node_ready.go:35] waiting up to 6m0s for node "addons-445250" to be "Ready" ...
	I0923 10:22:32.127879   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0923 10:22:32.239522   11967 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0923 10:22:32.239604   11967 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0923 10:22:32.329601   11967 addons.go:431] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0923 10:22:32.329684   11967 ssh_runner.go:362] scp inspektor-gadget/ig-clusterrolebinding.yaml --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0923 10:22:32.343924   11967 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0923 10:22:32.344005   11967 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0923 10:22:32.445635   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.11638761s)
	I0923 10:22:32.633750   11967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0923 10:22:32.633833   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0923 10:22:32.638879   11967 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:22:32.638905   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0923 10:22:32.645923   11967 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I0923 10:22:32.645948   11967 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0923 10:22:32.649688   11967 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-445250" context rescaled to 1 replicas
	I0923 10:22:32.949369   11967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0923 10:22:32.949444   11967 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0923 10:22:33.033520   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (1.704142355s)
	I0923 10:22:33.227536   11967 addons.go:431] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:22:33.227561   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0923 10:22:33.239664   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:22:33.427133   11967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0923 10:22:33.427213   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0923 10:22:33.532321   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0923 10:22:33.727155   11967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0923 10:22:33.727249   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0923 10:22:34.046873   11967 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:22:34.046911   11967 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0923 10:22:34.149845   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:34.233755   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0923 10:22:35.227770   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (3.885253813s)
	I0923 10:22:35.227973   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (3.882480333s)
	I0923 10:22:36.338824   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (4.898369608s)
	I0923 10:22:36.338860   11967 addons.go:475] Verifying addon ingress=true in "addons-445250"
	I0923 10:22:36.339007   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (4.792498678s)
	I0923 10:22:36.339038   11967 addons.go:475] Verifying addon registry=true in "addons-445250"
	I0923 10:22:36.339090   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.703092374s)
	I0923 10:22:36.338954   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.893724471s)
	I0923 10:22:36.339140   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (4.300273199s)
	I0923 10:22:36.339204   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (4.211246073s)
	I0923 10:22:36.340459   11967 addons.go:475] Verifying addon metrics-server=true in "addons-445250"
	I0923 10:22:36.340949   11967 out.go:177] * Verifying registry addon...
	I0923 10:22:36.340964   11967 out.go:177] * Verifying ingress addon...
	I0923 10:22:36.342063   11967 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-445250 service yakd-dashboard -n yakd-dashboard
	
	I0923 10:22:36.343731   11967 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0923 10:22:36.343939   11967 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0923 10:22:36.348954   11967 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 10:22:36.348976   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:36.350483   11967 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0923 10:22:36.350503   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:36.554792   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:36.933962   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:36.937073   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:37.041012   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.801300714s)
	W0923 10:22:37.041067   11967 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 10:22:37.041095   11967 retry.go:31] will retry after 370.601258ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0923 10:22:37.041141   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (3.508711885s)
	I0923 10:22:37.291210   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (3.057397179s)
	I0923 10:22:37.291243   11967 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-445250"
	I0923 10:22:37.293123   11967 out.go:177] * Verifying csi-hostpath-driver addon...
	I0923 10:22:37.295283   11967 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0923 10:22:37.330959   11967 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 10:22:37.330988   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:37.411870   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0923 10:22:37.431984   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:37.432434   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:37.799504   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:37.846652   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:37.847318   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:37.894861   11967 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0923 10:22:37.894922   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:37.910904   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:38.037420   11967 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0923 10:22:38.132707   11967 addons.go:234] Setting addon gcp-auth=true in "addons-445250"
	I0923 10:22:38.132762   11967 host.go:66] Checking if "addons-445250" exists ...
	I0923 10:22:38.133409   11967 cli_runner.go:164] Run: docker container inspect addons-445250 --format={{.State.Status}}
	I0923 10:22:38.167105   11967 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0923 10:22:38.167160   11967 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-445250
	I0923 10:22:38.184042   11967 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32768 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/addons-445250/id_rsa Username:docker}
	I0923 10:22:38.329969   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:38.348850   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:38.349827   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:38.798829   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:38.847385   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:38.847868   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:39.052033   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:39.298764   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:39.347022   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:39.347523   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:39.828069   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:39.847719   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:39.848042   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:40.039490   11967 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.627570811s)
	I0923 10:22:40.039578   11967 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (1.872440211s)
	I0923 10:22:40.042091   11967 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.3
	I0923 10:22:40.043723   11967 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0923 10:22:40.045225   11967 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0923 10:22:40.045257   11967 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0923 10:22:40.064228   11967 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0923 10:22:40.064253   11967 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0923 10:22:40.082222   11967 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:22:40.082246   11967 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0923 10:22:40.136907   11967 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.1/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0923 10:22:40.329111   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:40.347816   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:40.348314   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:40.743497   11967 addons.go:475] Verifying addon gcp-auth=true in "addons-445250"
	I0923 10:22:40.745653   11967 out.go:177] * Verifying gcp-auth addon...
	I0923 10:22:40.747983   11967 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0923 10:22:40.750552   11967 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0923 10:22:40.750569   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:40.851357   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:40.851626   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:40.852043   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:41.052135   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:41.252005   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:41.298689   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:41.347279   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:41.347602   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:41.751632   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:41.798093   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:41.846555   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:41.847144   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:42.250929   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:42.298461   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:42.346929   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:42.347234   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:42.750863   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:42.798245   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:42.846557   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:42.847000   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:43.251127   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:43.355618   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:43.356062   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:43.356298   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:43.552536   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:43.751077   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:43.798734   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:43.847103   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:43.847581   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:44.251644   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:44.298309   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:44.346771   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:44.347034   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:44.750721   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:44.798594   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:44.847101   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:44.847535   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:45.251199   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:45.299044   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:45.347547   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:45.348189   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:45.750705   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:45.798232   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:45.846841   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:45.847120   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:46.052259   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:46.250899   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:46.298397   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:46.346963   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:46.347430   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:46.751856   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:46.798438   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:46.846533   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:46.846987   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:47.250492   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:47.298879   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:47.347127   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:47.347819   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:47.751310   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:47.799015   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:47.847226   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:47.847773   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:48.251448   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:48.298949   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:48.347317   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:48.347589   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:48.551694   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:48.752137   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:48.798472   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:48.846972   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:48.847400   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:49.251341   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:49.298972   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:49.347471   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:49.347951   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:49.750703   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:49.799078   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:49.847429   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:49.847812   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:50.251408   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:50.298942   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:50.347421   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:50.347893   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:50.552081   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:50.750626   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:50.798173   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:50.847276   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:50.848032   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:51.251477   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:51.298961   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:51.347458   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:51.347867   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:51.750664   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:51.798274   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:51.846535   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:51.847185   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:52.250749   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:52.298515   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:52.346957   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:52.347409   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:52.552403   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:52.751135   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:52.798654   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:52.847072   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:52.847476   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:53.251020   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:53.298711   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:53.347029   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:53.347626   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:53.751732   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:53.798241   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:53.846461   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:53.846962   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:54.250842   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:54.298271   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:54.346627   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:54.346949   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:54.750779   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:54.798482   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:54.846682   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:54.847175   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:55.052413   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:55.251076   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:55.298677   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:55.347003   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:55.347743   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:55.751068   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:55.798538   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:55.847067   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:55.847484   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:56.251150   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:56.298943   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:56.347453   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:56.347896   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:56.751095   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:56.798596   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:56.846745   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:56.847179   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:57.250839   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:57.298505   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:57.347074   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:57.347506   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:57.551579   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:57.751148   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:57.798529   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:57.846924   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:57.847369   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:58.251170   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:58.298665   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:58.347156   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:58.347556   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:58.751622   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:58.798291   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:58.846556   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:58.847159   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:59.251703   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:59.298260   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:59.346762   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:59.347393   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:22:59.552250   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:22:59.750656   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:22:59.798196   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:22:59.846497   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:22:59.846841   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:00.251199   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:00.298537   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:00.347146   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:00.347462   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:00.751327   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:00.798720   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:00.846991   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:00.847390   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:01.251092   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:01.298651   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:01.346885   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:01.347266   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:01.552532   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:23:01.751323   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:01.798797   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:01.847134   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:01.847636   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:02.251414   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:02.299046   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:02.346776   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:02.346976   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:02.751210   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:02.798665   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:02.847041   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:02.847588   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:03.251424   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:03.298846   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:03.347477   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:03.347937   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:03.751580   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:03.797877   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:03.847450   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:03.847870   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:04.052155   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:23:04.250974   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:04.298589   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:04.346910   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:04.347485   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:04.751530   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:04.799567   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:04.846770   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:04.847184   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:05.251007   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:05.298445   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:05.347135   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:05.347527   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:05.751388   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:05.799093   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:05.847646   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:05.848031   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:06.052367   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:23:06.250840   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:06.298387   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:06.346761   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:06.347238   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:06.751720   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:06.798318   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:06.846779   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:06.847219   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:07.251318   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:07.298911   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:07.347408   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:07.347769   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:07.751469   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:07.798992   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:07.847606   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:07.847853   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:08.251235   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:08.298906   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:08.347450   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:08.348057   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:08.552198   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:23:08.750869   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:08.798365   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:08.846408   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:08.846765   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:09.251760   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:09.298434   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:09.346956   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:09.347369   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:09.750692   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:09.798046   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:09.847526   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:09.848062   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:10.250707   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:10.298206   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:10.346577   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:10.346962   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:10.552617   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:23:10.750936   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:10.798405   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:10.846773   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:10.847081   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:11.250576   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:11.298011   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:11.347411   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:11.347813   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:11.750864   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:11.798382   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:11.846687   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:11.847174   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:12.250954   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:12.298486   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:12.346963   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:12.347499   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:12.552672   11967 node_ready.go:53] node "addons-445250" has status "Ready":"False"
	I0923 10:23:12.751565   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:12.798263   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:12.846609   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:12.847288   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:13.250649   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:13.298224   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:13.346581   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:13.347009   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:13.750948   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:13.798498   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:13.846756   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:13.847196   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:14.250875   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:14.298430   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:14.346812   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:14.347181   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:14.757534   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:14.831755   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:14.863298   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:14.863307   11967 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0923 10:23:14.863337   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:15.054652   11967 node_ready.go:49] node "addons-445250" has status "Ready":"True"
	I0923 10:23:15.054684   11967 node_ready.go:38] duration metric: took 43.005633575s for node "addons-445250" to be "Ready" ...
	I0923 10:23:15.054698   11967 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:23:15.138931   11967 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-fx58w" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:15.251452   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:15.301612   11967 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0923 10:23:15.301637   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:15.427434   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:15.428141   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:15.753427   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:15.855252   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:15.855518   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:15.855536   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:16.143869   11967 pod_ready.go:93] pod "coredns-7c65d6cfc9-fx58w" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:16.143890   11967 pod_ready.go:82] duration metric: took 1.004925199s for pod "coredns-7c65d6cfc9-fx58w" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.143908   11967 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-445250" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.147771   11967 pod_ready.go:93] pod "etcd-addons-445250" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:16.147797   11967 pod_ready.go:82] duration metric: took 3.880973ms for pod "etcd-addons-445250" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.147813   11967 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-445250" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.151340   11967 pod_ready.go:93] pod "kube-apiserver-addons-445250" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:16.151360   11967 pod_ready.go:82] duration metric: took 3.538721ms for pod "kube-apiserver-addons-445250" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.151379   11967 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-445250" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.154908   11967 pod_ready.go:93] pod "kube-controller-manager-addons-445250" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:16.154925   11967 pod_ready.go:82] duration metric: took 3.540171ms for pod "kube-controller-manager-addons-445250" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.154937   11967 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-wkmtk" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.251122   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:16.252541   11967 pod_ready.go:93] pod "kube-proxy-wkmtk" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:16.252560   11967 pod_ready.go:82] duration metric: took 97.616289ms for pod "kube-proxy-wkmtk" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.252569   11967 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-445250" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.298885   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:16.346935   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:16.347232   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:16.652929   11967 pod_ready.go:93] pod "kube-scheduler-addons-445250" in "kube-system" namespace has status "Ready":"True"
	I0923 10:23:16.652958   11967 pod_ready.go:82] duration metric: took 400.380255ms for pod "kube-scheduler-addons-445250" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.652971   11967 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace to be "Ready" ...
	I0923 10:23:16.751305   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:16.799551   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:16.847949   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:16.848185   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:17.251997   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:17.299328   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:17.347771   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:17.348037   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:17.751574   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:17.799610   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:17.848015   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:17.848650   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:18.250930   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:18.299312   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:18.347764   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:18.348433   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:18.659062   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:18.752418   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:18.799730   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:18.847222   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:18.847395   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:19.251503   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:19.299737   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:19.347056   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:19.347618   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:19.751323   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:19.799536   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:19.847668   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:19.847798   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:20.251502   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:20.299967   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:20.347574   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:20.347946   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:20.752027   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:20.799582   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:20.847709   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:20.848055   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:21.159098   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:21.252132   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:21.299964   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:21.346779   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:21.347020   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:21.755745   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:21.858762   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:21.859520   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:21.860194   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:22.251409   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:22.300044   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:22.346882   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:22.347235   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:22.751664   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:22.852777   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:22.853039   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:22.853226   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:23.251068   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:23.299520   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:23.347578   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:23.347931   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:23.658413   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:23.751068   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:23.851935   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:23.852480   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:23.852589   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:24.251460   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:24.299593   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:24.347663   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:24.348012   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:24.752139   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:24.829769   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:24.848533   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:24.848714   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:25.250787   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:25.299026   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:25.347280   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:25.347450   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:25.751684   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:25.852481   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:25.852917   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:25.853012   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:26.158564   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:26.251164   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:26.299637   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:26.346953   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:26.347393   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:26.751177   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:26.800249   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:26.900081   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:26.900480   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:27.251580   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:27.299779   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:27.352497   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:27.353041   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:27.751475   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:27.853114   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:27.853317   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:27.853731   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:28.158745   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:28.251214   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:28.298730   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:28.347152   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:28.347334   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:28.751028   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:28.851629   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:28.852256   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:28.852278   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:29.251249   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:29.299788   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:29.347140   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:29.347661   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:29.752405   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:29.800154   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:29.846882   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:29.847331   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:30.251406   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:30.300215   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:30.347056   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:30.347641   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:30.658131   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:30.751454   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:30.800486   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:30.847123   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:30.847735   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:31.252032   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:31.300096   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:31.352870   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:31.353371   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:31.751766   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:31.804056   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:31.847133   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:31.847758   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:32.251744   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:32.299223   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:32.347388   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:32.347653   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:32.751592   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:32.799179   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:32.847018   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:32.847414   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:33.159363   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:33.251561   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:33.298927   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:33.347523   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:33.347589   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:33.751641   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:33.798959   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:33.847153   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:33.847494   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:34.251511   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:34.329090   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:34.346926   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:34.347136   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:34.751327   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:34.852511   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:34.853626   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:34.853680   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:35.252182   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:35.299378   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:35.347693   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:35.348033   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:35.658931   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:35.752279   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:35.800092   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:35.852945   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:35.853579   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:36.251424   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:36.300230   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:36.400564   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:36.400859   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:36.750934   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:36.799444   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:36.848021   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:36.848290   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:37.251588   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:37.300049   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:37.352506   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:37.352773   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:37.750964   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:37.799890   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:37.852354   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:37.852606   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:38.158705   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:38.251162   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:38.299581   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:38.348004   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:38.348356   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:38.751115   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:38.799410   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:38.848082   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:38.848190   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:39.251584   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:39.300084   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:39.347098   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:39.347599   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:39.751816   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:39.799402   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:39.847842   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:39.848902   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:40.251286   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:40.299565   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:40.347972   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:40.348284   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:40.659317   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:40.751324   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:40.829307   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:40.847208   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:40.847905   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:41.251435   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:41.328874   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:41.347851   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:41.348180   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:41.751765   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:41.799242   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:41.847826   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:41.848244   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:42.252250   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:42.299996   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:42.348350   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:42.348560   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:42.751101   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:42.828445   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:42.847541   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:42.847967   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:43.158526   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:43.251605   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:43.331532   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:43.347791   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:43.348403   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:43.751856   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:43.848295   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:43.850906   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:43.850993   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:44.251344   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:44.299676   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:44.348302   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:44.348644   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:44.751195   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:44.799580   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:44.847459   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:44.847814   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:45.251761   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:45.299534   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:45.347837   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:45.348259   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:45.658244   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:45.751109   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:45.799378   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:45.847844   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:45.848484   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:46.250862   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:46.299309   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:46.347588   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:46.347881   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:46.752096   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:46.830819   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:46.849324   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:46.849449   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:47.251489   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:47.329696   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:47.352328   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:47.352675   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:47.659565   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:47.751839   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:47.799363   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:47.847629   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:47.848160   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:48.251322   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:48.299644   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:48.348437   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:48.349011   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:48.751971   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:48.799208   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:48.847311   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:48.847677   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:49.252345   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:49.353336   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:49.354113   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:49.354290   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:49.751696   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:49.798850   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:49.847044   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:49.847294   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:50.159412   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:50.251315   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:50.300359   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:50.347407   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:50.347852   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:50.752148   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:50.853086   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:50.853794   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:50.853937   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:51.251808   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:51.299349   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:51.347605   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:51.347803   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:51.770445   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:51.800059   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:51.847168   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:51.847523   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:52.252094   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:52.299496   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:52.347921   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:52.348258   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:52.658499   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:52.751149   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:52.799391   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:52.847917   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:52.848195   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:53.251084   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:53.299459   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:53.348165   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:53.349134   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:53.751749   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:53.799085   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:53.847146   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:53.847698   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:54.251525   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:54.299814   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:54.347087   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:54.347485   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:54.658586   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:54.751738   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:54.798920   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:54.847135   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:54.847497   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:55.251534   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:55.299916   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:55.347315   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:55.347570   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:55.751726   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:55.799056   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:55.847243   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:55.847517   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:56.250904   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:56.329860   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:56.347928   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:56.348157   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:56.659564   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:56.751715   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:56.798895   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:56.848713   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:56.849087   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:57.327397   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:57.330171   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:57.347623   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:57.349031   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:57.752514   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:57.831070   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:57.849260   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:57.929421   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:58.251507   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:58.329086   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:58.348239   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:58.349299   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:58.659625   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:23:58.751107   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:58.828912   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:58.848131   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:58.848674   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:59.251980   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:59.329647   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:59.347593   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:59.348472   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:23:59.751659   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:23:59.799518   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:23:59.847937   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:23:59.848242   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:00.251754   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:00.299551   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:00.348376   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:00.348776   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:00.751910   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:00.799228   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:00.847852   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:00.848386   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:01.159636   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:01.251545   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:01.300654   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:01.347444   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:01.347798   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:01.751291   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:01.799969   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:01.847062   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:01.847151   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:02.250921   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:02.299432   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:02.347411   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:02.347701   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:02.751456   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:02.799637   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:02.846847   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:02.847345   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:03.251056   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:03.299408   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:03.349455   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:03.349496   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:03.658044   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:03.751632   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:03.800475   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:03.847815   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:03.848013   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:04.251337   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:04.300084   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:04.347301   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:04.347740   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:04.751934   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:04.828237   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:04.847307   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:04.847974   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:05.252170   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:05.328300   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:05.347284   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:05.347600   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:05.658958   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:05.752071   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:05.853170   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:05.853743   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:05.853986   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:06.252000   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:06.328730   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:06.347793   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:06.348249   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:06.751665   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:06.828961   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:06.849117   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:06.849787   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:07.251616   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:07.300647   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:07.347543   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:07.348597   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:07.771812   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:07.876237   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:07.876559   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:07.877562   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:08.159319   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:08.251653   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:08.299807   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:08.348721   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:08.348927   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:08.752006   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:08.799289   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:08.847514   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0923 10:24:08.847770   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:09.251104   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:09.299398   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:09.347380   11967 kapi.go:107] duration metric: took 1m33.003646242s to wait for kubernetes.io/minikube-addons=registry ...
	I0923 10:24:09.347767   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:09.751748   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:09.800319   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:09.847334   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:10.251156   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:10.299664   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:10.348059   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:10.658634   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:10.750897   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:10.799121   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:10.847887   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:11.251008   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:11.299420   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:11.348466   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:11.750925   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:11.799015   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:11.847967   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:12.251825   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:12.299403   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:12.347748   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:12.751282   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:12.800000   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:12.847468   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:13.159267   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:13.251700   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:13.299065   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:13.347829   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:13.752005   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0923 10:24:13.799406   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:13.853893   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:14.253633   11967 kapi.go:107] duration metric: took 1m33.505659378s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0923 10:24:14.257404   11967 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-445250 cluster.
	I0923 10:24:14.258882   11967 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0923 10:24:14.260323   11967 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0923 10:24:14.299938   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:14.348717   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:14.799563   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:14.847849   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:15.329354   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:15.347994   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:15.658992   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:15.799926   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:15.847969   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:16.299363   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:16.348302   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:16.799654   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:16.848799   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:17.299696   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:17.348435   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:17.659051   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:17.799970   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:17.848268   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:18.300125   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:18.400393   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:18.799588   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:18.848195   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:19.300200   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:19.348989   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:19.799189   11967 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0923 10:24:19.847633   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:20.166062   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:20.330843   11967 kapi.go:107] duration metric: took 1m43.035557511s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0923 10:24:20.348554   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:20.848824   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:21.348354   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:21.848082   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:22.348802   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:22.659419   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:22.847751   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:23.348517   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:23.848949   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:24.347848   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:24.848694   11967 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0923 10:24:25.158725   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:25.348583   11967 kapi.go:107] duration metric: took 1m49.004639978s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0923 10:24:25.350870   11967 out.go:177] * Enabled addons: nvidia-device-plugin, default-storageclass, ingress-dns, storage-provisioner-rancher, storage-provisioner, cloud-spanner, metrics-server, yakd, inspektor-gadget, volumesnapshots, registry, gcp-auth, csi-hostpath-driver, ingress
	I0923 10:24:25.353079   11967 addons.go:510] duration metric: took 1m54.712359706s for enable addons: enabled=[nvidia-device-plugin default-storageclass ingress-dns storage-provisioner-rancher storage-provisioner cloud-spanner metrics-server yakd inspektor-gadget volumesnapshots registry gcp-auth csi-hostpath-driver ingress]
	I0923 10:24:27.658306   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:29.658584   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:32.158410   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:34.657759   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:36.658121   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:38.658705   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:40.659320   11967 pod_ready.go:103] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"False"
	I0923 10:24:41.658677   11967 pod_ready.go:93] pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace has status "Ready":"True"
	I0923 10:24:41.658710   11967 pod_ready.go:82] duration metric: took 1m25.005729374s for pod "metrics-server-84c5f94fbc-7csnr" in "kube-system" namespace to be "Ready" ...
	I0923 10:24:41.658725   11967 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-649c2" in "kube-system" namespace to be "Ready" ...
	I0923 10:24:41.663462   11967 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-649c2" in "kube-system" namespace has status "Ready":"True"
	I0923 10:24:41.663484   11967 pod_ready.go:82] duration metric: took 4.751466ms for pod "nvidia-device-plugin-daemonset-649c2" in "kube-system" namespace to be "Ready" ...
	I0923 10:24:41.663503   11967 pod_ready.go:39] duration metric: took 1m26.60878964s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0923 10:24:41.663521   11967 api_server.go:52] waiting for apiserver process to appear ...
	I0923 10:24:41.663567   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0923 10:24:41.663611   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 10:24:41.696491   11967 cri.go:89] found id: "8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1"
	I0923 10:24:41.696517   11967 cri.go:89] found id: ""
	I0923 10:24:41.696526   11967 logs.go:276] 1 containers: [8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1]
	I0923 10:24:41.696575   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:41.699787   11967 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0923 10:24:41.699845   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 10:24:41.732611   11967 cri.go:89] found id: "5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478"
	I0923 10:24:41.732632   11967 cri.go:89] found id: ""
	I0923 10:24:41.732641   11967 logs.go:276] 1 containers: [5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478]
	I0923 10:24:41.732680   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:41.736045   11967 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0923 10:24:41.736113   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 10:24:41.768329   11967 cri.go:89] found id: "1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2"
	I0923 10:24:41.768360   11967 cri.go:89] found id: ""
	I0923 10:24:41.768370   11967 logs.go:276] 1 containers: [1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2]
	I0923 10:24:41.768426   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:41.771643   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0923 10:24:41.771702   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 10:24:41.805603   11967 cri.go:89] found id: "5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9"
	I0923 10:24:41.805627   11967 cri.go:89] found id: ""
	I0923 10:24:41.805637   11967 logs.go:276] 1 containers: [5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9]
	I0923 10:24:41.805686   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:41.808896   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0923 10:24:41.808968   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 10:24:41.843211   11967 cri.go:89] found id: "60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41"
	I0923 10:24:41.843234   11967 cri.go:89] found id: ""
	I0923 10:24:41.843242   11967 logs.go:276] 1 containers: [60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41]
	I0923 10:24:41.843293   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:41.846569   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 10:24:41.846631   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 10:24:41.878951   11967 cri.go:89] found id: "3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86"
	I0923 10:24:41.878969   11967 cri.go:89] found id: ""
	I0923 10:24:41.878977   11967 logs.go:276] 1 containers: [3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86]
	I0923 10:24:41.879015   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:41.882160   11967 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0923 10:24:41.882216   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 10:24:41.913249   11967 cri.go:89] found id: "3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147"
	I0923 10:24:41.913273   11967 cri.go:89] found id: ""
	I0923 10:24:41.913281   11967 logs.go:276] 1 containers: [3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147]
	I0923 10:24:41.913337   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:41.916358   11967 logs.go:123] Gathering logs for kubelet ...
	I0923 10:24:41.916384   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 10:24:41.962291   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778748    1645 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:24:41.962472   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778805    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	W0923 10:24:41.962607   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778858    1645 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:24:41.962764   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778872    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	I0923 10:24:42.000201   11967 logs.go:123] Gathering logs for kube-proxy [60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41] ...
	I0923 10:24:42.000236   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41"
	I0923 10:24:42.033282   11967 logs.go:123] Gathering logs for container status ...
	I0923 10:24:42.033307   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 10:24:42.074054   11967 logs.go:123] Gathering logs for coredns [1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2] ...
	I0923 10:24:42.074089   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2"
	I0923 10:24:42.107707   11967 logs.go:123] Gathering logs for kube-scheduler [5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9] ...
	I0923 10:24:42.107734   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9"
	I0923 10:24:42.144872   11967 logs.go:123] Gathering logs for kube-controller-manager [3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86] ...
	I0923 10:24:42.144926   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86"
	I0923 10:24:42.199993   11967 logs.go:123] Gathering logs for kindnet [3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147] ...
	I0923 10:24:42.200024   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147"
	I0923 10:24:42.234245   11967 logs.go:123] Gathering logs for dmesg ...
	I0923 10:24:42.234274   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 10:24:42.246004   11967 logs.go:123] Gathering logs for describe nodes ...
	I0923 10:24:42.246038   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 10:24:42.353925   11967 logs.go:123] Gathering logs for kube-apiserver [8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1] ...
	I0923 10:24:42.353954   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1"
	I0923 10:24:42.444039   11967 logs.go:123] Gathering logs for etcd [5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478] ...
	I0923 10:24:42.444069   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478"
	I0923 10:24:42.488688   11967 logs.go:123] Gathering logs for CRI-O ...
	I0923 10:24:42.488720   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0923 10:24:42.565082   11967 out.go:358] Setting ErrFile to fd 2...
	I0923 10:24:42.565110   11967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 10:24:42.565165   11967 out.go:270] X Problems detected in kubelet:
	W0923 10:24:42.565173   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778748    1645 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:24:42.565180   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778805    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	W0923 10:24:42.565191   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778858    1645 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:24:42.565197   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778872    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	I0923 10:24:42.565201   11967 out.go:358] Setting ErrFile to fd 2...
	I0923 10:24:42.565206   11967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:24:52.566001   11967 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:24:52.579983   11967 api_server.go:72] duration metric: took 2m21.939291421s to wait for apiserver process to appear ...
	I0923 10:24:52.580014   11967 api_server.go:88] waiting for apiserver healthz status ...
	I0923 10:24:52.580048   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0923 10:24:52.580103   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 10:24:52.613694   11967 cri.go:89] found id: "8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1"
	I0923 10:24:52.613720   11967 cri.go:89] found id: ""
	I0923 10:24:52.613729   11967 logs.go:276] 1 containers: [8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1]
	I0923 10:24:52.613775   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:52.617041   11967 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0923 10:24:52.617099   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 10:24:52.649762   11967 cri.go:89] found id: "5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478"
	I0923 10:24:52.649781   11967 cri.go:89] found id: ""
	I0923 10:24:52.649788   11967 logs.go:276] 1 containers: [5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478]
	I0923 10:24:52.649852   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:52.653130   11967 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0923 10:24:52.653186   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 10:24:52.685749   11967 cri.go:89] found id: "1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2"
	I0923 10:24:52.685769   11967 cri.go:89] found id: ""
	I0923 10:24:52.685775   11967 logs.go:276] 1 containers: [1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2]
	I0923 10:24:52.685813   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:52.688875   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0923 10:24:52.688931   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 10:24:52.721693   11967 cri.go:89] found id: "5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9"
	I0923 10:24:52.721716   11967 cri.go:89] found id: ""
	I0923 10:24:52.721723   11967 logs.go:276] 1 containers: [5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9]
	I0923 10:24:52.721772   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:52.725081   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0923 10:24:52.725136   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 10:24:52.759437   11967 cri.go:89] found id: "60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41"
	I0923 10:24:52.759464   11967 cri.go:89] found id: ""
	I0923 10:24:52.759474   11967 logs.go:276] 1 containers: [60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41]
	I0923 10:24:52.759530   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:52.762872   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 10:24:52.762937   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 10:24:52.797876   11967 cri.go:89] found id: "3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86"
	I0923 10:24:52.797893   11967 cri.go:89] found id: ""
	I0923 10:24:52.797900   11967 logs.go:276] 1 containers: [3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86]
	I0923 10:24:52.797940   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:52.801151   11967 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0923 10:24:52.801201   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 10:24:52.833315   11967 cri.go:89] found id: "3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147"
	I0923 10:24:52.833339   11967 cri.go:89] found id: ""
	I0923 10:24:52.833346   11967 logs.go:276] 1 containers: [3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147]
	I0923 10:24:52.833387   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:24:52.836655   11967 logs.go:123] Gathering logs for describe nodes ...
	I0923 10:24:52.836681   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 10:24:52.927959   11967 logs.go:123] Gathering logs for kube-apiserver [8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1] ...
	I0923 10:24:52.927988   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1"
	I0923 10:24:52.970219   11967 logs.go:123] Gathering logs for coredns [1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2] ...
	I0923 10:24:52.970246   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2"
	I0923 10:24:53.005352   11967 logs.go:123] Gathering logs for kube-scheduler [5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9] ...
	I0923 10:24:53.005388   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9"
	I0923 10:24:53.043256   11967 logs.go:123] Gathering logs for kube-controller-manager [3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86] ...
	I0923 10:24:53.043284   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86"
	I0923 10:24:53.097302   11967 logs.go:123] Gathering logs for CRI-O ...
	I0923 10:24:53.097340   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0923 10:24:53.173928   11967 logs.go:123] Gathering logs for container status ...
	I0923 10:24:53.173959   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 10:24:53.214820   11967 logs.go:123] Gathering logs for dmesg ...
	I0923 10:24:53.214848   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 10:24:53.226459   11967 logs.go:123] Gathering logs for etcd [5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478] ...
	I0923 10:24:53.226486   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478"
	I0923 10:24:53.269173   11967 logs.go:123] Gathering logs for kube-proxy [60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41] ...
	I0923 10:24:53.269204   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41"
	I0923 10:24:53.302182   11967 logs.go:123] Gathering logs for kindnet [3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147] ...
	I0923 10:24:53.302257   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147"
	I0923 10:24:53.338936   11967 logs.go:123] Gathering logs for kubelet ...
	I0923 10:24:53.338965   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 10:24:53.384315   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778748    1645 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:24:53.384503   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778805    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	W0923 10:24:53.384632   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778858    1645 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:24:53.384787   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778872    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	I0923 10:24:53.422192   11967 out.go:358] Setting ErrFile to fd 2...
	I0923 10:24:53.422221   11967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 10:24:53.422272   11967 out.go:270] X Problems detected in kubelet:
	W0923 10:24:53.422279   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778748    1645 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:24:53.422286   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778805    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	W0923 10:24:53.422294   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778858    1645 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:24:53.422303   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778872    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	I0923 10:24:53.422308   11967 out.go:358] Setting ErrFile to fd 2...
	I0923 10:24:53.422314   11967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:25:03.423825   11967 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I0923 10:25:03.428133   11967 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I0923 10:25:03.428969   11967 api_server.go:141] control plane version: v1.31.1
	I0923 10:25:03.428992   11967 api_server.go:131] duration metric: took 10.848971435s to wait for apiserver health ...
	I0923 10:25:03.429000   11967 system_pods.go:43] waiting for kube-system pods to appear ...
	I0923 10:25:03.429020   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0923 10:25:03.429067   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0923 10:25:03.463555   11967 cri.go:89] found id: "8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1"
	I0923 10:25:03.463573   11967 cri.go:89] found id: ""
	I0923 10:25:03.463582   11967 logs.go:276] 1 containers: [8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1]
	I0923 10:25:03.463622   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:25:03.466867   11967 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0923 10:25:03.466923   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0923 10:25:03.498838   11967 cri.go:89] found id: "5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478"
	I0923 10:25:03.498862   11967 cri.go:89] found id: ""
	I0923 10:25:03.498870   11967 logs.go:276] 1 containers: [5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478]
	I0923 10:25:03.498916   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:25:03.502169   11967 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0923 10:25:03.502224   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0923 10:25:03.535181   11967 cri.go:89] found id: "1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2"
	I0923 10:25:03.535202   11967 cri.go:89] found id: ""
	I0923 10:25:03.535211   11967 logs.go:276] 1 containers: [1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2]
	I0923 10:25:03.535260   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:25:03.538506   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0923 10:25:03.538568   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0923 10:25:03.571929   11967 cri.go:89] found id: "5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9"
	I0923 10:25:03.571954   11967 cri.go:89] found id: ""
	I0923 10:25:03.571963   11967 logs.go:276] 1 containers: [5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9]
	I0923 10:25:03.572007   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:25:03.575352   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0923 10:25:03.575421   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0923 10:25:03.608263   11967 cri.go:89] found id: "60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41"
	I0923 10:25:03.608286   11967 cri.go:89] found id: ""
	I0923 10:25:03.608296   11967 logs.go:276] 1 containers: [60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41]
	I0923 10:25:03.608353   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:25:03.611725   11967 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0923 10:25:03.611781   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0923 10:25:03.643940   11967 cri.go:89] found id: "3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86"
	I0923 10:25:03.643974   11967 cri.go:89] found id: ""
	I0923 10:25:03.643985   11967 logs.go:276] 1 containers: [3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86]
	I0923 10:25:03.644031   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:25:03.647205   11967 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0923 10:25:03.647259   11967 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0923 10:25:03.680120   11967 cri.go:89] found id: "3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147"
	I0923 10:25:03.680145   11967 cri.go:89] found id: ""
	I0923 10:25:03.680155   11967 logs.go:276] 1 containers: [3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147]
	I0923 10:25:03.680197   11967 ssh_runner.go:195] Run: which crictl
	I0923 10:25:03.683474   11967 logs.go:123] Gathering logs for describe nodes ...
	I0923 10:25:03.683500   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.1/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0923 10:25:03.783529   11967 logs.go:123] Gathering logs for kube-controller-manager [3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86] ...
	I0923 10:25:03.783558   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86"
	I0923 10:25:03.838870   11967 logs.go:123] Gathering logs for container status ...
	I0923 10:25:03.838909   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0923 10:25:03.879312   11967 logs.go:123] Gathering logs for kubelet ...
	I0923 10:25:03.879343   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W0923 10:25:03.925363   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778748    1645 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:25:03.925562   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778805    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	W0923 10:25:03.925696   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778858    1645 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:25:03.925851   11967 logs.go:138] Found kubelet problem: Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778872    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	I0923 10:25:03.966109   11967 logs.go:123] Gathering logs for dmesg ...
	I0923 10:25:03.966148   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0923 10:25:03.978653   11967 logs.go:123] Gathering logs for coredns [1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2] ...
	I0923 10:25:03.978691   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2"
	I0923 10:25:04.012260   11967 logs.go:123] Gathering logs for kube-scheduler [5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9] ...
	I0923 10:25:04.012287   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9"
	I0923 10:25:04.049729   11967 logs.go:123] Gathering logs for kube-proxy [60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41] ...
	I0923 10:25:04.049759   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41"
	I0923 10:25:04.082626   11967 logs.go:123] Gathering logs for kindnet [3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147] ...
	I0923 10:25:04.082662   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147"
	I0923 10:25:04.117339   11967 logs.go:123] Gathering logs for CRI-O ...
	I0923 10:25:04.117364   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0923 10:25:04.188147   11967 logs.go:123] Gathering logs for kube-apiserver [8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1] ...
	I0923 10:25:04.188192   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1"
	I0923 10:25:04.230982   11967 logs.go:123] Gathering logs for etcd [5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478] ...
	I0923 10:25:04.231014   11967 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478"
	I0923 10:25:04.275512   11967 out.go:358] Setting ErrFile to fd 2...
	I0923 10:25:04.275542   11967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0923 10:25:04.275603   11967 out.go:270] X Problems detected in kubelet:
	W0923 10:25:04.275611   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778748    1645 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:25:04.275621   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778805    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	W0923 10:25:04.275632   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: W0923 10:22:30.778858    1645 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:addons-445250" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-445250' and this object
	W0923 10:25:04.275639   11967 out.go:270]   Sep 23 10:22:30 addons-445250 kubelet[1645]: E0923 10:22:30.778872    1645 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:addons-445250\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-445250' and this object" logger="UnhandledError"
	I0923 10:25:04.275644   11967 out.go:358] Setting ErrFile to fd 2...
	I0923 10:25:04.275655   11967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:25:14.287581   11967 system_pods.go:59] 18 kube-system pods found
	I0923 10:25:14.287615   11967 system_pods.go:61] "coredns-7c65d6cfc9-fx58w" [76135cab-71d6-4fbc-9730-7e157e19b3d1] Running
	I0923 10:25:14.287621   11967 system_pods.go:61] "csi-hostpath-attacher-0" [c14ba032-645c-477a-8576-55cfd6df0d60] Running
	I0923 10:25:14.287624   11967 system_pods.go:61] "csi-hostpath-resizer-0" [9c153fb6-cf96-4170-aba0-81da3c93da24] Running
	I0923 10:25:14.287628   11967 system_pods.go:61] "csi-hostpathplugin-jb7xc" [e6337313-aeb5-44b2-9ac3-0ad53d08846e] Running
	I0923 10:25:14.287631   11967 system_pods.go:61] "etcd-addons-445250" [3f591ed3-ef76-488a-8099-62df99f1aad4] Running
	I0923 10:25:14.287634   11967 system_pods.go:61] "kindnet-dzbp5" [add1ea93-1e0d-43a8-bef7-651410611beb] Running
	I0923 10:25:14.287638   11967 system_pods.go:61] "kube-apiserver-addons-445250" [dc91b9f8-0364-49b3-9a53-60f0bcda9e0f] Running
	I0923 10:25:14.287641   11967 system_pods.go:61] "kube-controller-manager-addons-445250" [cf367f20-e011-4533-85f2-3353fc3d0730] Running
	I0923 10:25:14.287646   11967 system_pods.go:61] "kube-ingress-dns-minikube" [2eb91201-ae53-4248-b0dc-bc754dc7f77c] Running
	I0923 10:25:14.287649   11967 system_pods.go:61] "kube-proxy-wkmtk" [fbf3d292-a3ed-4397-bfb9-c32ebca66f2a] Running
	I0923 10:25:14.287652   11967 system_pods.go:61] "kube-scheduler-addons-445250" [a53aad31-25c2-4939-a256-7dedca01ddd7] Running
	I0923 10:25:14.287656   11967 system_pods.go:61] "metrics-server-84c5f94fbc-7csnr" [de3ce7e3-ca3b-4719-baa0-60b0964a15e6] Running
	I0923 10:25:14.287661   11967 system_pods.go:61] "nvidia-device-plugin-daemonset-649c2" [ad56c28d-1cef-404e-a46b-44ed08feea84] Running
	I0923 10:25:14.287666   11967 system_pods.go:61] "registry-66c9cd494c-nrpsw" [40d0085a-ea70-4052-ad07-a26bb7092539] Running
	I0923 10:25:14.287672   11967 system_pods.go:61] "registry-proxy-gnlc5" [d7382df4-3be8-48d0-9dcb-8cb5cc78647c] Running
	I0923 10:25:14.287675   11967 system_pods.go:61] "snapshot-controller-56fcc65765-dlmwp" [fd57301d-090a-49ee-a7a9-64fe81f0524a] Running
	I0923 10:25:14.287681   11967 system_pods.go:61] "snapshot-controller-56fcc65765-gvjzd" [8a3bfbc9-c59d-4af0-9e6d-c7823fa7b098] Running
	I0923 10:25:14.287685   11967 system_pods.go:61] "storage-provisioner" [b95afb17-c57c-4bcb-9763-8c43faa5ee12] Running
	I0923 10:25:14.287693   11967 system_pods.go:74] duration metric: took 10.858688236s to wait for pod list to return data ...
	I0923 10:25:14.287702   11967 default_sa.go:34] waiting for default service account to be created ...
	I0923 10:25:14.289991   11967 default_sa.go:45] found service account: "default"
	I0923 10:25:14.290010   11967 default_sa.go:55] duration metric: took 2.299912ms for default service account to be created ...
	I0923 10:25:14.290018   11967 system_pods.go:116] waiting for k8s-apps to be running ...
	I0923 10:25:14.298150   11967 system_pods.go:86] 18 kube-system pods found
	I0923 10:25:14.298176   11967 system_pods.go:89] "coredns-7c65d6cfc9-fx58w" [76135cab-71d6-4fbc-9730-7e157e19b3d1] Running
	I0923 10:25:14.298181   11967 system_pods.go:89] "csi-hostpath-attacher-0" [c14ba032-645c-477a-8576-55cfd6df0d60] Running
	I0923 10:25:14.298185   11967 system_pods.go:89] "csi-hostpath-resizer-0" [9c153fb6-cf96-4170-aba0-81da3c93da24] Running
	I0923 10:25:14.298188   11967 system_pods.go:89] "csi-hostpathplugin-jb7xc" [e6337313-aeb5-44b2-9ac3-0ad53d08846e] Running
	I0923 10:25:14.298192   11967 system_pods.go:89] "etcd-addons-445250" [3f591ed3-ef76-488a-8099-62df99f1aad4] Running
	I0923 10:25:14.298196   11967 system_pods.go:89] "kindnet-dzbp5" [add1ea93-1e0d-43a8-bef7-651410611beb] Running
	I0923 10:25:14.298200   11967 system_pods.go:89] "kube-apiserver-addons-445250" [dc91b9f8-0364-49b3-9a53-60f0bcda9e0f] Running
	I0923 10:25:14.298205   11967 system_pods.go:89] "kube-controller-manager-addons-445250" [cf367f20-e011-4533-85f2-3353fc3d0730] Running
	I0923 10:25:14.298208   11967 system_pods.go:89] "kube-ingress-dns-minikube" [2eb91201-ae53-4248-b0dc-bc754dc7f77c] Running
	I0923 10:25:14.298212   11967 system_pods.go:89] "kube-proxy-wkmtk" [fbf3d292-a3ed-4397-bfb9-c32ebca66f2a] Running
	I0923 10:25:14.298218   11967 system_pods.go:89] "kube-scheduler-addons-445250" [a53aad31-25c2-4939-a256-7dedca01ddd7] Running
	I0923 10:25:14.298222   11967 system_pods.go:89] "metrics-server-84c5f94fbc-7csnr" [de3ce7e3-ca3b-4719-baa0-60b0964a15e6] Running
	I0923 10:25:14.298227   11967 system_pods.go:89] "nvidia-device-plugin-daemonset-649c2" [ad56c28d-1cef-404e-a46b-44ed08feea84] Running
	I0923 10:25:14.298230   11967 system_pods.go:89] "registry-66c9cd494c-nrpsw" [40d0085a-ea70-4052-ad07-a26bb7092539] Running
	I0923 10:25:14.298236   11967 system_pods.go:89] "registry-proxy-gnlc5" [d7382df4-3be8-48d0-9dcb-8cb5cc78647c] Running
	I0923 10:25:14.298239   11967 system_pods.go:89] "snapshot-controller-56fcc65765-dlmwp" [fd57301d-090a-49ee-a7a9-64fe81f0524a] Running
	I0923 10:25:14.298244   11967 system_pods.go:89] "snapshot-controller-56fcc65765-gvjzd" [8a3bfbc9-c59d-4af0-9e6d-c7823fa7b098] Running
	I0923 10:25:14.298247   11967 system_pods.go:89] "storage-provisioner" [b95afb17-c57c-4bcb-9763-8c43faa5ee12] Running
	I0923 10:25:14.298253   11967 system_pods.go:126] duration metric: took 8.230518ms to wait for k8s-apps to be running ...
	I0923 10:25:14.298262   11967 system_svc.go:44] waiting for kubelet service to be running ....
	I0923 10:25:14.298303   11967 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:25:14.309069   11967 system_svc.go:56] duration metric: took 10.799947ms WaitForService to wait for kubelet
	I0923 10:25:14.309093   11967 kubeadm.go:582] duration metric: took 2m43.668407459s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0923 10:25:14.309111   11967 node_conditions.go:102] verifying NodePressure condition ...
	I0923 10:25:14.312018   11967 node_conditions.go:122] node storage ephemeral capacity is 304681132Ki
	I0923 10:25:14.312045   11967 node_conditions.go:123] node cpu capacity is 8
	I0923 10:25:14.312058   11967 node_conditions.go:105] duration metric: took 2.941824ms to run NodePressure ...
	I0923 10:25:14.312068   11967 start.go:241] waiting for startup goroutines ...
	I0923 10:25:14.312077   11967 start.go:246] waiting for cluster config update ...
	I0923 10:25:14.312094   11967 start.go:255] writing updated cluster config ...
	I0923 10:25:14.312343   11967 ssh_runner.go:195] Run: rm -f paused
	I0923 10:25:14.359947   11967 start.go:600] kubectl: 1.31.1, cluster: 1.31.1 (minor skew: 0)
	I0923 10:25:14.362510   11967 out.go:177] * Done! kubectl is now configured to use "addons-445250" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Sep 23 10:36:25 addons-445250 crio[1027]: time="2024-09-23 10:36:25.861747601Z" level=info msg="Removing pod sandbox: fe27de295d179e4755232506a6e1869c94086988b61a9acdfc0c72b0a0cea554" id=dc1e56c5-8259-4fe3-a6c1-d52bde375ea3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 23 10:36:25 addons-445250 crio[1027]: time="2024-09-23 10:36:25.868428733Z" level=info msg="Removed pod sandbox: fe27de295d179e4755232506a6e1869c94086988b61a9acdfc0c72b0a0cea554" id=dc1e56c5-8259-4fe3-a6c1-d52bde375ea3 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Sep 23 10:36:33 addons-445250 crio[1027]: time="2024-09-23 10:36:33.542258790Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=0f8cc529-31de-4fbd-95a0-6668fb302073 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 10:36:33 addons-445250 crio[1027]: time="2024-09-23 10:36:33.542587521Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=0f8cc529-31de-4fbd-95a0-6668fb302073 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 10:36:48 addons-445250 crio[1027]: time="2024-09-23 10:36:48.541921931Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9c19e41d-f7b2-4089-be20-f9689347e9e9 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 10:36:48 addons-445250 crio[1027]: time="2024-09-23 10:36:48.542193829Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9c19e41d-f7b2-4089-be20-f9689347e9e9 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 10:37:02 addons-445250 crio[1027]: time="2024-09-23 10:37:02.542059631Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=82d9d12b-d814-4923-ad37-36ee7674d531 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 10:37:02 addons-445250 crio[1027]: time="2024-09-23 10:37:02.542290075Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=82d9d12b-d814-4923-ad37-36ee7674d531 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 10:37:14 addons-445250 crio[1027]: time="2024-09-23 10:37:14.542659628Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=abb536e5-309e-401b-a975-8a321d71a5bb name=/runtime.v1.ImageService/ImageStatus
	Sep 23 10:37:14 addons-445250 crio[1027]: time="2024-09-23 10:37:14.542920346Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=abb536e5-309e-401b-a975-8a321d71a5bb name=/runtime.v1.ImageService/ImageStatus
	Sep 23 10:37:25 addons-445250 crio[1027]: time="2024-09-23 10:37:25.542051293Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=d8c8f02b-f1df-4ab2-9145-4232d2337618 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 10:37:25 addons-445250 crio[1027]: time="2024-09-23 10:37:25.542316850Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=d8c8f02b-f1df-4ab2-9145-4232d2337618 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 10:37:40 addons-445250 crio[1027]: time="2024-09-23 10:37:40.541785054Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a2484266-8b47-4ebc-84f8-ecc7db2d2669 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 10:37:40 addons-445250 crio[1027]: time="2024-09-23 10:37:40.542000208Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a2484266-8b47-4ebc-84f8-ecc7db2d2669 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 10:37:55 addons-445250 crio[1027]: time="2024-09-23 10:37:55.542713794Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=9e10a232-37d1-4e67-bb78-1f416fc22050 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 10:37:55 addons-445250 crio[1027]: time="2024-09-23 10:37:55.542991700Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=9e10a232-37d1-4e67-bb78-1f416fc22050 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 10:38:09 addons-445250 crio[1027]: time="2024-09-23 10:38:09.541849618Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=813a1b44-a18d-4f14-b23d-3e0a9a10eb8b name=/runtime.v1.ImageService/ImageStatus
	Sep 23 10:38:09 addons-445250 crio[1027]: time="2024-09-23 10:38:09.542084390Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=813a1b44-a18d-4f14-b23d-3e0a9a10eb8b name=/runtime.v1.ImageService/ImageStatus
	Sep 23 10:38:23 addons-445250 crio[1027]: time="2024-09-23 10:38:23.542054397Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=04938e53-380d-4d8e-babf-bfcec1d5aa0c name=/runtime.v1.ImageService/ImageStatus
	Sep 23 10:38:23 addons-445250 crio[1027]: time="2024-09-23 10:38:23.542318267Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=04938e53-380d-4d8e-babf-bfcec1d5aa0c name=/runtime.v1.ImageService/ImageStatus
	Sep 23 10:38:35 addons-445250 crio[1027]: time="2024-09-23 10:38:35.541818979Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=a6cffe6b-d8fc-496e-8eea-50a1a3431502 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 10:38:35 addons-445250 crio[1027]: time="2024-09-23 10:38:35.542108437Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=a6cffe6b-d8fc-496e-8eea-50a1a3431502 name=/runtime.v1.ImageService/ImageStatus
	Sep 23 10:38:50 addons-445250 crio[1027]: time="2024-09-23 10:38:50.542622971Z" level=info msg="Checking image status: gcr.io/k8s-minikube/busybox:1.28.4-glibc" id=c208bc1d-ebae-4a5b-b76e-f92c3d9ba34d name=/runtime.v1.ImageService/ImageStatus
	Sep 23 10:38:50 addons-445250 crio[1027]: time="2024-09-23 10:38:50.542883200Z" level=info msg="Image gcr.io/k8s-minikube/busybox:1.28.4-glibc not found" id=c208bc1d-ebae-4a5b-b76e-f92c3d9ba34d name=/runtime.v1.ImageService/ImageStatus
	Sep 23 10:38:55 addons-445250 crio[1027]: time="2024-09-23 10:38:55.772931448Z" level=info msg="Stopping container: 26fbe31bfc2e32c1f03c1b6ae9aef0119aaa0cf95ca2c09b58f73c4c2b293a36 (timeout: 30s)" id=7a23db2c-0450-4531-81d8-5b3ade1df373 name=/runtime.v1.RuntimeService/StopContainer
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ddf43cced473f       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   cacffc01918ea       hello-world-app-55bf9c44b4-cz95t
	9b9d147b1d7d7       docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56                         5 minutes ago       Running             nginx                     0                   c86cb59ddb3ca       nginx
	595e24a79c3cc       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b            14 minutes ago      Running             gcp-auth                  0                   269c70f2ed966       gcp-auth-89d5ffd79-wh69l
	26fbe31bfc2e3       registry.k8s.io/metrics-server/metrics-server@sha256:78e46b57096ec75e302fbc853e36359555df5c827bb009ecfe66f97474cc2a5a   15 minutes ago      Running             metrics-server            0                   060b6c8c02d4c       metrics-server-84c5f94fbc-7csnr
	1ebaed16470de       c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6                                                        15 minutes ago      Running             coredns                   0                   8b47c72a2e89f       coredns-7c65d6cfc9-fx58w
	66c2617c6cdee       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                        15 minutes ago      Running             storage-provisioner       0                   ca64b60aaf77d       storage-provisioner
	60d69acfd0786       60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561                                                        16 minutes ago      Running             kube-proxy                0                   8b3d1fd790d7d       kube-proxy-wkmtk
	3fc705a9a7747       12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f                                                        16 minutes ago      Running             kindnet-cni               0                   16dd7a97e2486       kindnet-dzbp5
	5a7d4dfeab76c       2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4                                                        16 minutes ago      Running             etcd                      0                   d78357fa957f5       etcd-addons-445250
	3fc6d875aa953       175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1                                                        16 minutes ago      Running             kube-controller-manager   0                   b238baa295476       kube-controller-manager-addons-445250
	5e1692605ef5b       9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b                                                        16 minutes ago      Running             kube-scheduler            0                   1912f3295ca7d       kube-scheduler-addons-445250
	8b87d8d2ee711       6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee                                                        16 minutes ago      Running             kube-apiserver            0                   f275d2a0ce43d       kube-apiserver-addons-445250
	
	
	==> coredns [1ebaed16470defa1f68e7a2e10337433205f78000fdf80494bbb81499b4a6eb2] <==
	[INFO] 10.244.0.17:51021 - 2201 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000091016s
	[INFO] 10.244.0.17:48133 - 44271 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000049343s
	[INFO] 10.244.0.17:48133 - 55785 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000088427s
	[INFO] 10.244.0.17:49831 - 11625 "AAAA IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.004641559s
	[INFO] 10.244.0.17:49831 - 53357 "A IN registry.kube-system.svc.cluster.local.europe-west1-b.c.k8s-minikube.internal. udp 95 false 512" NXDOMAIN qr,rd,ra 95 0.008874643s
	[INFO] 10.244.0.17:47951 - 29897 "AAAA IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.004484598s
	[INFO] 10.244.0.17:47951 - 12748 "A IN registry.kube-system.svc.cluster.local.c.k8s-minikube.internal. udp 80 false 512" NXDOMAIN qr,rd,ra 80 0.01442901s
	[INFO] 10.244.0.17:48028 - 15319 "A IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004123886s
	[INFO] 10.244.0.17:48028 - 211 "AAAA IN registry.kube-system.svc.cluster.local.google.internal. udp 72 false 512" NXDOMAIN qr,rd,ra 72 0.004165972s
	[INFO] 10.244.0.17:47195 - 44952 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000070798s
	[INFO] 10.244.0.17:47195 - 64917 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000141683s
	[INFO] 10.244.0.19:37440 - 47006 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000160757s
	[INFO] 10.244.0.19:51770 - 28058 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000235131s
	[INFO] 10.244.0.19:37999 - 57631 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000117212s
	[INFO] 10.244.0.19:60851 - 28099 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000164334s
	[INFO] 10.244.0.19:60473 - 52842 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000127623s
	[INFO] 10.244.0.19:60093 - 46732 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000183998s
	[INFO] 10.244.0.19:59180 - 21854 "A IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.005303324s
	[INFO] 10.244.0.19:53723 - 13226 "AAAA IN storage.googleapis.com.europe-west1-b.c.k8s-minikube.internal. udp 90 false 1232" NXDOMAIN qr,rd,ra 79 0.006472921s
	[INFO] 10.244.0.19:57517 - 53934 "A IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.004844258s
	[INFO] 10.244.0.19:37603 - 62628 "AAAA IN storage.googleapis.com.c.k8s-minikube.internal. udp 75 false 1232" NXDOMAIN qr,rd,ra 64 0.007796574s
	[INFO] 10.244.0.19:52499 - 62644 "AAAA IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.004780066s
	[INFO] 10.244.0.19:43363 - 37803 "A IN storage.googleapis.com.google.internal. udp 67 false 1232" NXDOMAIN qr,rd,ra 56 0.005512487s
	[INFO] 10.244.0.19:50641 - 54574 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 140 0.000695895s
	[INFO] 10.244.0.19:42118 - 61953 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 116 0.000813877s
	
	
	==> describe nodes <==
	Name:               addons-445250
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-445250
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=f69bf2f8ed9442c9c01edbe27466c5398c68b986
	                    minikube.k8s.io/name=addons-445250
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_09_23T10_22_26_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-445250
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 23 Sep 2024 10:22:23 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-445250
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 23 Sep 2024 10:38:56 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 23 Sep 2024 10:36:31 +0000   Mon, 23 Sep 2024 10:22:21 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 23 Sep 2024 10:36:31 +0000   Mon, 23 Sep 2024 10:22:21 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 23 Sep 2024 10:36:31 +0000   Mon, 23 Sep 2024 10:22:21 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 23 Sep 2024 10:36:31 +0000   Mon, 23 Sep 2024 10:23:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-445250
	Capacity:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	Allocatable:
	  cpu:                8
	  ephemeral-storage:  304681132Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  memory:             32859312Ki
	  pods:               110
	System Info:
	  Machine ID:                 98cd57bf5c0b47f391b0c0e0a30c5e14
	  System UUID:                64a901d1-6ec3-40d1-a503-55d7681a31ba
	  Boot ID:                    7fc2d313-9727-4ab1-967f-13a3c84ada15
	  Kernel Version:             5.15.0-1069-gcp
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.1
	  Kube-Proxy Version:         v1.31.1
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  default                     hello-world-app-55bf9c44b4-cz95t         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m53s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m18s
	  gcp-auth                    gcp-auth-89d5ffd79-wh69l                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 coredns-7c65d6cfc9-fx58w                 100m (1%)     0 (0%)      70Mi (0%)        170Mi (0%)     16m
	  kube-system                 etcd-addons-445250                       100m (1%)     0 (0%)      100Mi (0%)       0 (0%)         16m
	  kube-system                 kindnet-dzbp5                            100m (1%)     100m (1%)   50Mi (0%)        50Mi (0%)      16m
	  kube-system                 kube-apiserver-addons-445250             250m (3%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-addons-445250    200m (2%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-wkmtk                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-addons-445250             100m (1%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (10%)  100m (1%)
	  memory             220Mi (0%)  220Mi (0%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 16m   kube-proxy       
	  Normal   Starting                 16m   kubelet          Starting kubelet.
	  Warning  CgroupV1                 16m   kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  16m   kubelet          Node addons-445250 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m   kubelet          Node addons-445250 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m   kubelet          Node addons-445250 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m   node-controller  Node addons-445250 event: Registered Node addons-445250 in Controller
	  Normal   NodeReady                15m   kubelet          Node addons-445250 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.003589] platform eisa.0: EISA: Cannot allocate resource for mainboard
	[  +0.001035] platform eisa.0: Cannot allocate resource for EISA slot 1
	[  +0.000753] platform eisa.0: Cannot allocate resource for EISA slot 2
	[  +0.001022] platform eisa.0: Cannot allocate resource for EISA slot 3
	[  +0.000686] platform eisa.0: Cannot allocate resource for EISA slot 4
	[  +0.000710] platform eisa.0: Cannot allocate resource for EISA slot 5
	[  +0.000605] platform eisa.0: Cannot allocate resource for EISA slot 6
	[  +0.000867] platform eisa.0: Cannot allocate resource for EISA slot 7
	[  +0.000747] platform eisa.0: Cannot allocate resource for EISA slot 8
	[  +0.635766] systemd[1]: Configuration file /run/systemd/system/netplan-ovs-cleanup.service is marked world-inaccessible. This has no effect as configuration data is accessible via APIs without restrictions. Proceeding anyway.
	[  +7.213677] kauditd_printk_skb: 46 callbacks suppressed
	[Sep23 10:33] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 16 eb c1 e7 c7 82 32 bd d1 20 fc 68 08 00
	[  +1.023987] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000005] ll header: 00000000: 16 eb c1 e7 c7 82 32 bd d1 20 fc 68 08 00
	[  +2.019762] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 16 eb c1 e7 c7 82 32 bd d1 20 fc 68 08 00
	[Sep23 10:34] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000027] ll header: 00000000: 16 eb c1 e7 c7 82 32 bd d1 20 fc 68 08 00
	[  +8.191064] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000007] ll header: 00000000: 16 eb c1 e7 c7 82 32 bd d1 20 fc 68 08 00
	[ +16.126232] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 16 eb c1 e7 c7 82 32 bd d1 20 fc 68 08 00
	[ +33.276298] IPv4: martian source 10.244.0.20 from 127.0.0.1, on dev eth0
	[  +0.000008] ll header: 00000000: 16 eb c1 e7 c7 82 32 bd d1 20 fc 68 08 00
	
	
	==> etcd [5a7d4dfeab76cd68ce68c389e2b1d85827564883d5cc71b0301977d661e93478] <==
	{"level":"warn","ts":"2024-09-23T10:22:33.131317Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"105.346561ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8128032086776975712 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/controllerrevisions/kube-system/nvidia-device-plugin-daemonset-76bfdf4db8\" mod_revision:0 > success:<request_put:<key:\"/registry/controllerrevisions/kube-system/nvidia-device-plugin-daemonset-76bfdf4db8\" value_size:2820 >> failure:<>>","response":"size:16"}
	{"level":"info","ts":"2024-09-23T10:22:33.131445Z","caller":"traceutil/trace.go:171","msg":"trace[1069954861] linearizableReadLoop","detail":"{readStateIndex:377; appliedIndex:376; }","duration":"181.441071ms","start":"2024-09-23T10:22:32.949991Z","end":"2024-09-23T10:22:33.131432Z","steps":["trace[1069954861] 'read index received'  (duration: 75.782049ms)","trace[1069954861] 'applied index is now lower than readState.Index'  (duration: 105.65779ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T10:22:33.131583Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"402.530772ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/standard\" ","response":"range_response_count:1 size:992"}
	{"level":"info","ts":"2024-09-23T10:22:33.131616Z","caller":"traceutil/trace.go:171","msg":"trace[2088191430] range","detail":"{range_begin:/registry/storageclasses/standard; range_end:; response_count:1; response_revision:366; }","duration":"402.569222ms","start":"2024-09-23T10:22:32.729039Z","end":"2024-09-23T10:22:33.131608Z","steps":["trace[2088191430] 'agreement among raft nodes before linearized reading'  (duration: 402.426676ms)"],"step_count":1}
	{"level":"warn","ts":"2024-09-23T10:22:33.131648Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-09-23T10:22:32.729011Z","time spent":"402.631755ms","remote":"127.0.0.1:49284","response type":"/etcdserverpb.KV/Range","request count":0,"request size":35,"response count":1,"response size":1016,"request content":"key:\"/registry/storageclasses/standard\" "}
	{"level":"info","ts":"2024-09-23T10:22:33.131969Z","caller":"traceutil/trace.go:171","msg":"trace[485971237] transaction","detail":"{read_only:false; response_revision:366; number_of_response:1; }","duration":"288.992414ms","start":"2024-09-23T10:22:32.842964Z","end":"2024-09-23T10:22:33.131957Z","steps":["trace[485971237] 'process raft request'  (duration: 182.871523ms)","trace[485971237] 'compare'  (duration: 105.121142ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T10:22:33.132153Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"289.499934ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/deployments/kube-system/coredns\" ","response":"range_response_count:1 size:4096"}
	{"level":"info","ts":"2024-09-23T10:22:33.132187Z","caller":"traceutil/trace.go:171","msg":"trace[517510634] range","detail":"{range_begin:/registry/deployments/kube-system/coredns; range_end:; response_count:1; response_revision:366; }","duration":"289.537023ms","start":"2024-09-23T10:22:32.842643Z","end":"2024-09-23T10:22:33.132180Z","steps":["trace[517510634] 'agreement among raft nodes before linearized reading'  (duration: 289.463087ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:22:33.539907Z","caller":"traceutil/trace.go:171","msg":"trace[2144953017] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"108.868731ms","start":"2024-09-23T10:22:33.431009Z","end":"2024-09-23T10:22:33.539878Z","steps":["trace[2144953017] 'process raft request'  (duration: 104.859929ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:22:33.541630Z","caller":"traceutil/trace.go:171","msg":"trace[398091402] transaction","detail":"{read_only:false; response_revision:380; number_of_response:1; }","duration":"104.416004ms","start":"2024-09-23T10:22:33.437193Z","end":"2024-09-23T10:22:33.541609Z","steps":["trace[398091402] 'process raft request'  (duration: 104.009984ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:22:33.542060Z","caller":"traceutil/trace.go:171","msg":"trace[668743326] transaction","detail":"{read_only:false; response_revision:381; number_of_response:1; }","duration":"104.729221ms","start":"2024-09-23T10:22:33.437317Z","end":"2024-09-23T10:22:33.542046Z","steps":["trace[668743326] 'process raft request'  (duration: 103.952712ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:22:33.542277Z","caller":"traceutil/trace.go:171","msg":"trace[1672766993] transaction","detail":"{read_only:false; response_revision:382; number_of_response:1; }","duration":"104.838258ms","start":"2024-09-23T10:22:33.437430Z","end":"2024-09-23T10:22:33.542268Z","steps":["trace[1672766993] 'process raft request'  (duration: 103.868629ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:22:33.542412Z","caller":"traceutil/trace.go:171","msg":"trace[1767469839] transaction","detail":"{read_only:false; response_revision:383; number_of_response:1; }","duration":"103.483072ms","start":"2024-09-23T10:22:33.438922Z","end":"2024-09-23T10:22:33.542405Z","steps":["trace[1767469839] 'process raft request'  (duration: 102.407175ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:22:33.736052Z","caller":"traceutil/trace.go:171","msg":"trace[227628294] transaction","detail":"{read_only:false; response_revision:384; number_of_response:1; }","duration":"102.334143ms","start":"2024-09-23T10:22:33.633699Z","end":"2024-09-23T10:22:33.736033Z","steps":["trace[227628294] 'process raft request'  (duration: 13.990139ms)","trace[227628294] 'compare'  (duration: 85.643779ms)"],"step_count":2}
	{"level":"info","ts":"2024-09-23T10:22:33.736225Z","caller":"traceutil/trace.go:171","msg":"trace[2102522964] transaction","detail":"{read_only:false; response_revision:385; number_of_response:1; }","duration":"101.939414ms","start":"2024-09-23T10:22:33.634278Z","end":"2024-09-23T10:22:33.736218Z","steps":["trace[2102522964] 'process raft request'  (duration: 99.195559ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:22:34.032263Z","caller":"traceutil/trace.go:171","msg":"trace[1847492038] transaction","detail":"{read_only:false; response_revision:398; number_of_response:1; }","duration":"100.284349ms","start":"2024-09-23T10:22:33.931958Z","end":"2024-09-23T10:22:34.032242Z","steps":["trace[1847492038] 'process raft request'  (duration: 99.986846ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:22:34.130120Z","caller":"traceutil/trace.go:171","msg":"trace[300160576] transaction","detail":"{read_only:false; response_revision:400; number_of_response:1; }","duration":"190.991092ms","start":"2024-09-23T10:22:33.939083Z","end":"2024-09-23T10:22:34.130074Z","steps":["trace[300160576] 'process raft request'  (duration: 108.050293ms)","trace[300160576] 'store kv pair into bolt db' {req_type:put; key:/registry/deployments/kube-system/coredns; req_size:4078; } (duration: 77.321365ms)"],"step_count":2}
	{"level":"warn","ts":"2024-09-23T10:22:34.431549Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.369404ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-09-23T10:22:34.431682Z","caller":"traceutil/trace.go:171","msg":"trace[1877297112] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:429; }","duration":"100.50784ms","start":"2024-09-23T10:22:34.331159Z","end":"2024-09-23T10:22:34.431667Z","steps":["trace[1877297112] 'agreement among raft nodes before linearized reading'  (duration: 100.356061ms)"],"step_count":1}
	{"level":"info","ts":"2024-09-23T10:32:21.645850Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1524}
	{"level":"info","ts":"2024-09-23T10:32:21.668993Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1524,"took":"22.717488ms","hash":1048422649,"current-db-size-bytes":6332416,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":3301376,"current-db-size-in-use":"3.3 MB"}
	{"level":"info","ts":"2024-09-23T10:32:21.669036Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1048422649,"revision":1524,"compact-revision":-1}
	{"level":"info","ts":"2024-09-23T10:37:21.650493Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1945}
	{"level":"info","ts":"2024-09-23T10:37:21.666479Z","caller":"mvcc/kvstore_compaction.go:69","msg":"finished scheduled compaction","compact-revision":1945,"took":"15.50771ms","hash":1115190244,"current-db-size-bytes":6332416,"current-db-size":"6.3 MB","current-db-size-in-use-bytes":4878336,"current-db-size-in-use":"4.9 MB"}
	{"level":"info","ts":"2024-09-23T10:37:21.666530Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1115190244,"revision":1945,"compact-revision":1524}
	
	
	==> gcp-auth [595e24a79c3ccf249c4aaed9888b59fd920080ef1b7290f246cb0006fc71308a] <==
	2024/09/23 10:25:14 Ready to write response ...
	2024/09/23 10:25:14 Ready to marshal response ...
	2024/09/23 10:25:14 Ready to write response ...
	2024/09/23 10:33:27 Ready to marshal response ...
	2024/09/23 10:33:27 Ready to write response ...
	2024/09/23 10:33:35 Ready to marshal response ...
	2024/09/23 10:33:35 Ready to write response ...
	2024/09/23 10:33:38 Ready to marshal response ...
	2024/09/23 10:33:38 Ready to write response ...
	2024/09/23 10:33:52 Ready to marshal response ...
	2024/09/23 10:33:52 Ready to write response ...
	2024/09/23 10:34:09 Ready to marshal response ...
	2024/09/23 10:34:09 Ready to write response ...
	2024/09/23 10:34:09 Ready to marshal response ...
	2024/09/23 10:34:09 Ready to write response ...
	2024/09/23 10:34:22 Ready to marshal response ...
	2024/09/23 10:34:22 Ready to write response ...
	2024/09/23 10:34:42 Ready to marshal response ...
	2024/09/23 10:34:42 Ready to write response ...
	2024/09/23 10:34:42 Ready to marshal response ...
	2024/09/23 10:34:42 Ready to write response ...
	2024/09/23 10:34:42 Ready to marshal response ...
	2024/09/23 10:34:42 Ready to write response ...
	2024/09/23 10:36:03 Ready to marshal response ...
	2024/09/23 10:36:03 Ready to write response ...
	
	
	==> kernel <==
	 10:38:57 up 21 min,  0 users,  load average: 0.05, 0.21, 0.22
	Linux addons-445250 5.15.0-1069-gcp #77~20.04.1-Ubuntu SMP Sun Sep 1 19:39:16 UTC 2024 x86_64 x86_64 x86_64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [3fc705a9a77475623860167e13c37b8d8b11d4c5c6af8fae6d5c34389a954147] <==
	I0923 10:36:54.637631       1 main.go:299] handling current node
	I0923 10:37:04.629618       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:37:04.629659       1 main.go:299] handling current node
	I0923 10:37:14.629798       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:37:14.629843       1 main.go:299] handling current node
	I0923 10:37:24.633631       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:37:24.633664       1 main.go:299] handling current node
	I0923 10:37:34.628949       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:37:34.628984       1 main.go:299] handling current node
	I0923 10:37:44.633601       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:37:44.633632       1 main.go:299] handling current node
	I0923 10:37:54.633639       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:37:54.633692       1 main.go:299] handling current node
	I0923 10:38:04.630701       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:38:04.630740       1 main.go:299] handling current node
	I0923 10:38:14.629632       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:38:14.629664       1 main.go:299] handling current node
	I0923 10:38:24.633580       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:38:24.633612       1 main.go:299] handling current node
	I0923 10:38:34.629256       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:38:34.629298       1 main.go:299] handling current node
	I0923 10:38:44.633601       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:38:44.633638       1 main.go:299] handling current node
	I0923 10:38:54.633586       1 main.go:295] Handling node with IPs: map[192.168.49.2:{}]
	I0923 10:38:54.633621       1 main.go:299] handling current node
	
	
	==> kube-apiserver [8b87d8d2ee711a2921823603f22c8c2140afea99251b8f190bb97757bd569ff1] <==
	E0923 10:24:46.576989       1 controller.go:146] "Unhandled Error" err=<
		Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
		, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	 > logger="UnhandledError"
	I0923 10:24:46.587407       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0923 10:33:33.261205       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W0923 10:33:34.276041       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I0923 10:33:38.712682       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I0923 10:33:39.049386       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.47.123"}
	I0923 10:33:49.532462       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0923 10:34:08.670342       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:34:08.670392       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 10:34:08.685068       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:34:08.685107       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 10:34:08.685195       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:34:08.738295       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:34:08.738544       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0923 10:34:08.826492       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0923 10:34:08.826532       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0923 10:34:09.686230       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0923 10:34:09.826884       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W0923 10:34:09.841944       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	E0923 10:34:38.707943       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0923 10:34:42.655531       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.100.246.242"}
	I0923 10:36:04.066372       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.16.78"}
	
	
	==> kube-controller-manager [3fc6d875aa9536ba5892ec372ce89a5831dcc0255f413af3a1ea53305e8fac86] <==
	W0923 10:36:49.886136       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:36:49.886176       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:36:55.624229       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:36:55.624274       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:37:02.664099       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:37:02.664144       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:37:03.553818       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:37:03.553859       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:37:46.578809       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:37:46.578846       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:37:46.877934       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:37:46.877980       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:37:47.319275       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:37:47.319318       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:37:51.027408       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:37:51.027450       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:38:37.510962       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:38:37.511011       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:38:40.142501       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:38:40.142543       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:38:46.399185       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:38:46.399227       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W0923 10:38:50.137213       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0923 10:38:50.137257       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I0923 10:38:55.763475       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/metrics-server-84c5f94fbc" duration="9.232µs"
	
	
	==> kube-proxy [60d69acfd0786cf56d3c3420b0cc9dfab72750dc62c8d323ac0f65a161bdcb41] <==
	I0923 10:22:34.431903       1 server_linux.go:66] "Using iptables proxy"
	I0923 10:22:35.042477       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E0923 10:22:35.042566       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0923 10:22:35.338576       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I0923 10:22:35.338730       1 server_linux.go:169] "Using iptables Proxier"
	I0923 10:22:35.342534       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0923 10:22:35.342914       1 server.go:483] "Version info" version="v1.31.1"
	I0923 10:22:35.342944       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0923 10:22:35.344273       1 config.go:105] "Starting endpoint slice config controller"
	I0923 10:22:35.344364       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0923 10:22:35.344302       1 config.go:328] "Starting node config controller"
	I0923 10:22:35.344482       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0923 10:22:35.344292       1 config.go:199] "Starting service config controller"
	I0923 10:22:35.344524       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0923 10:22:35.445049       1 shared_informer.go:320] Caches are synced for service config
	I0923 10:22:35.445083       1 shared_informer.go:320] Caches are synced for node config
	I0923 10:22:35.445054       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [5e1692605ef5b62d4e07545688c4cc5421df0aedcc72759999cfb7db050a2ff9] <==
	E0923 10:22:23.044121       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0923 10:22:23.044074       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:23.044627       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0923 10:22:23.044704       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 10:22:23.044717       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:23.044254       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 10:22:23.044734       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	E0923 10:22:23.044753       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:23.044774       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0923 10:22:23.044800       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:23.045089       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0923 10:22:23.045151       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:23.983340       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0923 10:22:23.983386       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:23.986617       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0923 10:22:23.986665       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:24.010130       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0923 10:22:24.010176       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:24.047286       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0923 10:22:24.047439       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:24.182956       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0923 10:22:24.183033       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0923 10:22:24.191245       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0923 10:22:24.191331       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0923 10:22:24.442656       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Sep 23 10:38:05 addons-445250 kubelet[1645]: E0923 10:38:05.877647    1645 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087885877325119,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:570254,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:38:09 addons-445250 kubelet[1645]: E0923 10:38:09.542279    1645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="cf5ff0cb-1670-40c0-b132-16e835022e57"
	Sep 23 10:38:15 addons-445250 kubelet[1645]: E0923 10:38:15.879777    1645 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087895879517591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:570254,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:38:15 addons-445250 kubelet[1645]: E0923 10:38:15.879819    1645 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087895879517591,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:570254,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:38:23 addons-445250 kubelet[1645]: E0923 10:38:23.542546    1645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="cf5ff0cb-1670-40c0-b132-16e835022e57"
	Sep 23 10:38:25 addons-445250 kubelet[1645]: E0923 10:38:25.881630    1645 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087905881426754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:570254,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:38:25 addons-445250 kubelet[1645]: E0923 10:38:25.881678    1645 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087905881426754,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:570254,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:38:35 addons-445250 kubelet[1645]: E0923 10:38:35.542342    1645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="cf5ff0cb-1670-40c0-b132-16e835022e57"
	Sep 23 10:38:35 addons-445250 kubelet[1645]: E0923 10:38:35.884005    1645 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087915883793103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:570254,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:38:35 addons-445250 kubelet[1645]: E0923 10:38:35.884037    1645 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087915883793103,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:570254,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:38:45 addons-445250 kubelet[1645]: E0923 10:38:45.886178    1645 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087925885984503,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:570254,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:38:45 addons-445250 kubelet[1645]: E0923 10:38:45.886210    1645 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087925885984503,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:570254,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:38:50 addons-445250 kubelet[1645]: E0923 10:38:50.543137    1645 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"busybox\" with ImagePullBackOff: \"Back-off pulling image \\\"gcr.io/k8s-minikube/busybox:1.28.4-glibc\\\"\"" pod="default/busybox" podUID="cf5ff0cb-1670-40c0-b132-16e835022e57"
	Sep 23 10:38:55 addons-445250 kubelet[1645]: E0923 10:38:55.888555    1645 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087935888283256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:570254,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:38:55 addons-445250 kubelet[1645]: E0923 10:38:55.888602    1645 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1727087935888283256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:570254,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Sep 23 10:38:57 addons-445250 kubelet[1645]: I0923 10:38:57.029392    1645 scope.go:117] "RemoveContainer" containerID="26fbe31bfc2e32c1f03c1b6ae9aef0119aaa0cf95ca2c09b58f73c4c2b293a36"
	Sep 23 10:38:57 addons-445250 kubelet[1645]: I0923 10:38:57.044257    1645 scope.go:117] "RemoveContainer" containerID="26fbe31bfc2e32c1f03c1b6ae9aef0119aaa0cf95ca2c09b58f73c4c2b293a36"
	Sep 23 10:38:57 addons-445250 kubelet[1645]: E0923 10:38:57.044622    1645 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"26fbe31bfc2e32c1f03c1b6ae9aef0119aaa0cf95ca2c09b58f73c4c2b293a36\": container with ID starting with 26fbe31bfc2e32c1f03c1b6ae9aef0119aaa0cf95ca2c09b58f73c4c2b293a36 not found: ID does not exist" containerID="26fbe31bfc2e32c1f03c1b6ae9aef0119aaa0cf95ca2c09b58f73c4c2b293a36"
	Sep 23 10:38:57 addons-445250 kubelet[1645]: I0923 10:38:57.044671    1645 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"26fbe31bfc2e32c1f03c1b6ae9aef0119aaa0cf95ca2c09b58f73c4c2b293a36"} err="failed to get container status \"26fbe31bfc2e32c1f03c1b6ae9aef0119aaa0cf95ca2c09b58f73c4c2b293a36\": rpc error: code = NotFound desc = could not find container \"26fbe31bfc2e32c1f03c1b6ae9aef0119aaa0cf95ca2c09b58f73c4c2b293a36\": container with ID starting with 26fbe31bfc2e32c1f03c1b6ae9aef0119aaa0cf95ca2c09b58f73c4c2b293a36 not found: ID does not exist"
	Sep 23 10:38:57 addons-445250 kubelet[1645]: I0923 10:38:57.066025    1645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/de3ce7e3-ca3b-4719-baa0-60b0964a15e6-tmp-dir\") pod \"de3ce7e3-ca3b-4719-baa0-60b0964a15e6\" (UID: \"de3ce7e3-ca3b-4719-baa0-60b0964a15e6\") "
	Sep 23 10:38:57 addons-445250 kubelet[1645]: I0923 10:38:57.066077    1645 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htkwm\" (UniqueName: \"kubernetes.io/projected/de3ce7e3-ca3b-4719-baa0-60b0964a15e6-kube-api-access-htkwm\") pod \"de3ce7e3-ca3b-4719-baa0-60b0964a15e6\" (UID: \"de3ce7e3-ca3b-4719-baa0-60b0964a15e6\") "
	Sep 23 10:38:57 addons-445250 kubelet[1645]: I0923 10:38:57.066404    1645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de3ce7e3-ca3b-4719-baa0-60b0964a15e6-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "de3ce7e3-ca3b-4719-baa0-60b0964a15e6" (UID: "de3ce7e3-ca3b-4719-baa0-60b0964a15e6"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
	Sep 23 10:38:57 addons-445250 kubelet[1645]: I0923 10:38:57.067793    1645 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de3ce7e3-ca3b-4719-baa0-60b0964a15e6-kube-api-access-htkwm" (OuterVolumeSpecName: "kube-api-access-htkwm") pod "de3ce7e3-ca3b-4719-baa0-60b0964a15e6" (UID: "de3ce7e3-ca3b-4719-baa0-60b0964a15e6"). InnerVolumeSpecName "kube-api-access-htkwm". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Sep 23 10:38:57 addons-445250 kubelet[1645]: I0923 10:38:57.167104    1645 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-htkwm\" (UniqueName: \"kubernetes.io/projected/de3ce7e3-ca3b-4719-baa0-60b0964a15e6-kube-api-access-htkwm\") on node \"addons-445250\" DevicePath \"\""
	Sep 23 10:38:57 addons-445250 kubelet[1645]: I0923 10:38:57.167138    1645 reconciler_common.go:288] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/de3ce7e3-ca3b-4719-baa0-60b0964a15e6-tmp-dir\") on node \"addons-445250\" DevicePath \"\""
	
	
	==> storage-provisioner [66c2617c6cdee7295f19941c86a3a9fbb87fd2b16719e15685c22bcccfbae254] <==
	I0923 10:23:15.441142       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0923 10:23:15.449522       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0923 10:23:15.449568       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0923 10:23:15.456173       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0923 10:23:15.456300       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f4170979-0bd2-4164-95c1-443418c50fe4", APIVersion:"v1", ResourceVersion:"884", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-445250_f06d8c52-62ab-4c97-b119-1dc16882ef82 became leader
	I0923 10:23:15.456350       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-445250_f06d8c52-62ab-4c97-b119-1dc16882ef82!
	I0923 10:23:15.556572       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-445250_f06d8c52-62ab-4c97-b119-1dc16882ef82!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-445250 -n addons-445250
helpers_test.go:261: (dbg) Run:  kubectl --context addons-445250 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/MetricsServer]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-445250 describe pod busybox
helpers_test.go:282: (dbg) kubectl --context addons-445250 describe pod busybox:

                                                
                                                
-- stdout --
	Name:             busybox
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             addons-445250/192.168.49.2
	Start Time:       Mon, 23 Sep 2024 10:25:14 +0000
	Labels:           integration-test=busybox
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.21
	IPs:
	  IP:  10.244.0.21
	Containers:
	  busybox:
	    Container ID:  
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      sleep
	      3600
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:
	      GOOGLE_APPLICATION_CREDENTIALS:  /google-app-creds.json
	      PROJECT_ID:                      this_is_fake
	      GCP_PROJECT:                     this_is_fake
	      GCLOUD_PROJECT:                  this_is_fake
	      GOOGLE_CLOUD_PROJECT:            this_is_fake
	      CLOUDSDK_CORE_PROJECT:           this_is_fake
	    Mounts:
	      /google-app-creds.json from gcp-creds (ro)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-xvh9z (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  kube-api-access-xvh9z:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	  gcp-creds:
	    Type:          HostPath (bare host directory volume)
	    Path:          /var/lib/minikube/google_application_credentials.json
	    HostPathType:  File
	QoS Class:         BestEffort
	Node-Selectors:    <none>
	Tolerations:       node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                   node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                   From               Message
	  ----     ------     ----                  ----               -------
	  Normal   Scheduled  13m                   default-scheduler  Successfully assigned default/busybox to addons-445250
	  Normal   Pulling    12m (x4 over 13m)     kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Warning  Failed     12m (x4 over 13m)     kubelet            Failed to pull image "gcr.io/k8s-minikube/busybox:1.28.4-glibc": unable to retrieve auth token: invalid username/password: unauthorized: authentication failed
	  Warning  Failed     12m (x4 over 13m)     kubelet            Error: ErrImagePull
	  Warning  Failed     11m (x6 over 13m)     kubelet            Error: ImagePullBackOff
	  Normal   BackOff    3m40s (x42 over 13m)  kubelet            Back-off pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/MetricsServer (340.75s)

                                                
                                    

Test pass (299/327)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 18.86
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.06
9 TestDownloadOnly/v1.20.0/DeleteAll 0.2
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.13
12 TestDownloadOnly/v1.31.1/json-events 24.08
13 TestDownloadOnly/v1.31.1/preload-exists 0
17 TestDownloadOnly/v1.31.1/LogsDuration 0.06
18 TestDownloadOnly/v1.31.1/DeleteAll 0.2
19 TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds 0.12
20 TestDownloadOnlyKic 1.05
21 TestBinaryMirror 0.74
22 TestOffline 54.71
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.05
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.05
27 TestAddons/Setup 207.69
31 TestAddons/serial/GCPAuth/Namespaces 0.14
35 TestAddons/parallel/InspektorGadget 10.63
38 TestAddons/parallel/CSI 51.8
39 TestAddons/parallel/Headlamp 17.32
40 TestAddons/parallel/CloudSpanner 5.46
41 TestAddons/parallel/LocalPath 56.77
42 TestAddons/parallel/NvidiaDevicePlugin 6.44
43 TestAddons/parallel/Yakd 10.65
44 TestAddons/StoppedEnableDisable 6
45 TestCertOptions 25.59
46 TestCertExpiration 225.32
48 TestForceSystemdFlag 25
49 TestForceSystemdEnv 36.46
51 TestKVMDriverInstallOrUpdate 5.45
55 TestErrorSpam/setup 19.87
56 TestErrorSpam/start 0.55
57 TestErrorSpam/status 0.83
58 TestErrorSpam/pause 1.46
59 TestErrorSpam/unpause 1.64
60 TestErrorSpam/stop 1.35
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 37.36
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 25.66
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.07
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.35
72 TestFunctional/serial/CacheCmd/cache/add_local 2.08
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.05
74 TestFunctional/serial/CacheCmd/cache/list 0.05
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.27
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.64
77 TestFunctional/serial/CacheCmd/cache/delete 0.09
78 TestFunctional/serial/MinikubeKubectlCmd 0.1
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.1
80 TestFunctional/serial/ExtraConfig 31.32
81 TestFunctional/serial/ComponentHealth 0.07
82 TestFunctional/serial/LogsCmd 1.35
83 TestFunctional/serial/LogsFileCmd 1.35
84 TestFunctional/serial/InvalidService 4.45
86 TestFunctional/parallel/ConfigCmd 0.32
87 TestFunctional/parallel/DashboardCmd 22.24
88 TestFunctional/parallel/DryRun 0.32
89 TestFunctional/parallel/InternationalLanguage 0.16
90 TestFunctional/parallel/StatusCmd 0.83
94 TestFunctional/parallel/ServiceCmdConnect 10.6
95 TestFunctional/parallel/AddonsCmd 0.14
96 TestFunctional/parallel/PersistentVolumeClaim 38.49
98 TestFunctional/parallel/SSHCmd 0.55
99 TestFunctional/parallel/CpCmd 1.73
100 TestFunctional/parallel/MySQL 22.19
101 TestFunctional/parallel/FileSync 0.24
102 TestFunctional/parallel/CertSync 1.47
106 TestFunctional/parallel/NodeLabels 0.07
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.55
110 TestFunctional/parallel/License 0.71
111 TestFunctional/parallel/ProfileCmd/profile_not_create 0.45
112 TestFunctional/parallel/Version/short 0.06
113 TestFunctional/parallel/Version/components 0.82
114 TestFunctional/parallel/ProfileCmd/profile_list 0.38
115 TestFunctional/parallel/ImageCommands/ImageListShort 1.06
116 TestFunctional/parallel/ImageCommands/ImageListTable 0.21
117 TestFunctional/parallel/ImageCommands/ImageListJson 0.21
118 TestFunctional/parallel/ImageCommands/ImageListYaml 0.23
119 TestFunctional/parallel/ImageCommands/ImageBuild 5.7
120 TestFunctional/parallel/ImageCommands/Setup 2.13
121 TestFunctional/parallel/ProfileCmd/profile_json_output 0.37
123 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.4
124 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
126 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 14.23
127 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 2.2
128 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.85
129 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.77
130 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.19
131 TestFunctional/parallel/ImageCommands/ImageRemove 0.48
132 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.73
133 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.52
134 TestFunctional/parallel/ServiceCmd/DeployApp 7.14
135 TestFunctional/parallel/MountCmd/any-port 9.68
136 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.06
137 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
141 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
142 TestFunctional/parallel/UpdateContextCmd/no_changes 0.12
143 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.12
144 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
145 TestFunctional/parallel/ServiceCmd/List 0.48
146 TestFunctional/parallel/ServiceCmd/JSONOutput 0.48
147 TestFunctional/parallel/ServiceCmd/HTTPS 0.53
148 TestFunctional/parallel/ServiceCmd/Format 0.5
149 TestFunctional/parallel/ServiceCmd/URL 0.5
150 TestFunctional/parallel/MountCmd/specific-port 2.46
151 TestFunctional/parallel/MountCmd/VerifyCleanup 2.25
152 TestFunctional/delete_echo-server_images 0.04
153 TestFunctional/delete_my-image_image 0.02
154 TestFunctional/delete_minikube_cached_images 0.02
158 TestMultiControlPlane/serial/StartCluster 151.91
159 TestMultiControlPlane/serial/DeployApp 7.71
160 TestMultiControlPlane/serial/PingHostFromPods 1.01
161 TestMultiControlPlane/serial/AddWorkerNode 32.85
162 TestMultiControlPlane/serial/NodeLabels 0.06
163 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.84
164 TestMultiControlPlane/serial/CopyFile 15.4
165 TestMultiControlPlane/serial/StopSecondaryNode 12.45
166 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.66
167 TestMultiControlPlane/serial/RestartSecondaryNode 33.45
168 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1
169 TestMultiControlPlane/serial/RestartClusterKeepsNodes 198.23
170 TestMultiControlPlane/serial/DeleteSecondaryNode 11.28
171 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.66
172 TestMultiControlPlane/serial/StopCluster 35.53
173 TestMultiControlPlane/serial/RestartCluster 95.02
174 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.65
175 TestMultiControlPlane/serial/AddSecondaryNode 66.32
176 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.84
180 TestJSONOutput/start/Command 70.91
181 TestJSONOutput/start/Audit 0
183 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
184 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
186 TestJSONOutput/pause/Command 0.63
187 TestJSONOutput/pause/Audit 0
189 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
190 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
192 TestJSONOutput/unpause/Command 0.57
193 TestJSONOutput/unpause/Audit 0
195 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
196 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
198 TestJSONOutput/stop/Command 5.73
199 TestJSONOutput/stop/Audit 0
201 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
203 TestErrorJSONOutput 0.2
205 TestKicCustomNetwork/create_custom_network 35.7
206 TestKicCustomNetwork/use_default_bridge_network 25.73
207 TestKicExistingNetwork 26.08
208 TestKicCustomSubnet 26.79
209 TestKicStaticIP 26.15
210 TestMainNoArgs 0.05
211 TestMinikubeProfile 49.23
214 TestMountStart/serial/StartWithMountFirst 9.08
215 TestMountStart/serial/VerifyMountFirst 0.24
216 TestMountStart/serial/StartWithMountSecond 6.22
217 TestMountStart/serial/VerifyMountSecond 0.24
218 TestMountStart/serial/DeleteFirst 1.61
219 TestMountStart/serial/VerifyMountPostDelete 0.24
220 TestMountStart/serial/Stop 1.17
221 TestMountStart/serial/RestartStopped 7.88
222 TestMountStart/serial/VerifyMountPostStop 0.24
225 TestMultiNode/serial/FreshStart2Nodes 68.54
226 TestMultiNode/serial/DeployApp2Nodes 5.43
227 TestMultiNode/serial/PingHostFrom2Pods 0.68
228 TestMultiNode/serial/AddNode 54.85
229 TestMultiNode/serial/MultiNodeLabels 0.07
230 TestMultiNode/serial/ProfileList 0.62
231 TestMultiNode/serial/CopyFile 8.93
232 TestMultiNode/serial/StopNode 2.1
233 TestMultiNode/serial/StartAfterStop 9.14
234 TestMultiNode/serial/RestartKeepsNodes 110.72
235 TestMultiNode/serial/DeleteNode 5.2
236 TestMultiNode/serial/StopMultiNode 23.7
237 TestMultiNode/serial/RestartMultiNode 54.23
238 TestMultiNode/serial/ValidateNameConflict 25.57
243 TestPreload 117.55
245 TestScheduledStopUnix 95.66
248 TestInsufficientStorage 9.68
249 TestRunningBinaryUpgrade 69.95
251 TestKubernetesUpgrade 358.24
252 TestMissingContainerUpgrade 177.88
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.09
258 TestNoKubernetes/serial/StartWithK8s 28.37
263 TestNetworkPlugins/group/false 7.57
267 TestNoKubernetes/serial/StartWithStopK8s 7.1
268 TestNoKubernetes/serial/Start 8.48
269 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
270 TestNoKubernetes/serial/ProfileList 1.83
271 TestNoKubernetes/serial/Stop 1.19
272 TestNoKubernetes/serial/StartNoArgs 7.22
273 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.25
274 TestStoppedBinaryUpgrade/Setup 2.57
275 TestStoppedBinaryUpgrade/Upgrade 141.11
276 TestStoppedBinaryUpgrade/MinikubeLogs 0.94
285 TestPause/serial/Start 45.38
286 TestNetworkPlugins/group/auto/Start 43.91
287 TestNetworkPlugins/group/kindnet/Start 69.36
288 TestPause/serial/SecondStartNoReconfiguration 37.01
289 TestNetworkPlugins/group/auto/KubeletFlags 0.33
290 TestNetworkPlugins/group/auto/NetCatPod 10.24
291 TestNetworkPlugins/group/auto/DNS 0.12
292 TestNetworkPlugins/group/auto/Localhost 0.1
293 TestNetworkPlugins/group/auto/HairPin 0.1
294 TestPause/serial/Pause 0.66
295 TestPause/serial/VerifyStatus 0.29
296 TestPause/serial/Unpause 0.59
297 TestPause/serial/PauseAgain 0.69
298 TestPause/serial/DeletePaused 2.26
299 TestPause/serial/VerifyDeletedResources 15.02
300 TestNetworkPlugins/group/calico/Start 59.69
301 TestNetworkPlugins/group/custom-flannel/Start 53.98
302 TestNetworkPlugins/group/kindnet/ControllerPod 5.07
303 TestNetworkPlugins/group/kindnet/KubeletFlags 0.32
304 TestNetworkPlugins/group/kindnet/NetCatPod 10.26
305 TestNetworkPlugins/group/kindnet/DNS 0.12
306 TestNetworkPlugins/group/kindnet/Localhost 0.11
307 TestNetworkPlugins/group/kindnet/HairPin 0.12
308 TestNetworkPlugins/group/enable-default-cni/Start 68.52
309 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.28
310 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.23
311 TestNetworkPlugins/group/calico/ControllerPod 6.01
312 TestNetworkPlugins/group/calico/KubeletFlags 0.25
313 TestNetworkPlugins/group/calico/NetCatPod 10.18
314 TestNetworkPlugins/group/custom-flannel/DNS 0.14
315 TestNetworkPlugins/group/custom-flannel/Localhost 0.11
316 TestNetworkPlugins/group/custom-flannel/HairPin 0.11
317 TestNetworkPlugins/group/calico/DNS 0.14
318 TestNetworkPlugins/group/calico/Localhost 0.11
319 TestNetworkPlugins/group/calico/HairPin 0.12
320 TestNetworkPlugins/group/flannel/Start 57.34
321 TestNetworkPlugins/group/bridge/Start 64.17
323 TestStartStop/group/old-k8s-version/serial/FirstStart 130.09
324 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.35
325 TestNetworkPlugins/group/enable-default-cni/NetCatPod 10.24
326 TestNetworkPlugins/group/enable-default-cni/DNS 0.14
327 TestNetworkPlugins/group/enable-default-cni/Localhost 0.14
328 TestNetworkPlugins/group/enable-default-cni/HairPin 0.17
329 TestNetworkPlugins/group/flannel/ControllerPod 6.01
331 TestStartStop/group/no-preload/serial/FirstStart 61
332 TestNetworkPlugins/group/flannel/KubeletFlags 0.26
333 TestNetworkPlugins/group/flannel/NetCatPod 9.2
334 TestNetworkPlugins/group/flannel/DNS 0.15
335 TestNetworkPlugins/group/flannel/Localhost 0.13
336 TestNetworkPlugins/group/flannel/HairPin 0.14
337 TestNetworkPlugins/group/bridge/KubeletFlags 0.34
338 TestNetworkPlugins/group/bridge/NetCatPod 10.21
339 TestNetworkPlugins/group/bridge/DNS 0.14
340 TestNetworkPlugins/group/bridge/Localhost 0.12
341 TestNetworkPlugins/group/bridge/HairPin 0.15
343 TestStartStop/group/embed-certs/serial/FirstStart 45.6
345 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 71.41
346 TestStartStop/group/no-preload/serial/DeployApp 9.24
347 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.86
348 TestStartStop/group/no-preload/serial/Stop 12.12
349 TestStartStop/group/embed-certs/serial/DeployApp 10.26
350 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.17
351 TestStartStop/group/no-preload/serial/SecondStart 261.94
352 TestStartStop/group/old-k8s-version/serial/DeployApp 10.4
353 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.98
354 TestStartStop/group/embed-certs/serial/Stop 12.39
355 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.88
356 TestStartStop/group/old-k8s-version/serial/Stop 12.44
357 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.18
358 TestStartStop/group/embed-certs/serial/SecondStart 262.9
359 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.17
360 TestStartStop/group/old-k8s-version/serial/SecondStart 128.45
361 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.29
362 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.94
363 TestStartStop/group/default-k8s-diff-port/serial/Stop 12.59
364 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.19
365 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 261.92
366 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
367 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.08
368 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.22
369 TestStartStop/group/old-k8s-version/serial/Pause 2.48
371 TestStartStop/group/newest-cni/serial/FirstStart 28.64
372 TestStartStop/group/newest-cni/serial/DeployApp 0
373 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.76
374 TestStartStop/group/newest-cni/serial/Stop 1.2
375 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.17
376 TestStartStop/group/newest-cni/serial/SecondStart 12.64
377 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
378 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
379 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.22
380 TestStartStop/group/newest-cni/serial/Pause 2.64
381 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
382 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.07
383 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.22
384 TestStartStop/group/no-preload/serial/Pause 2.61
385 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
386 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
387 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.22
388 TestStartStop/group/embed-certs/serial/Pause 2.59
389 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
390 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.07
391 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.22
392 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.55
TestDownloadOnly/v1.20.0/json-events (18.86s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-764506 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-764506 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (18.858400607s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (18.86s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0923 10:21:19.745625   10562 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I0923 10:21:19.745720   10562 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-3772/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-764506
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-764506: exit status 85 (58.834997ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-764506 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |          |
	|         | -p download-only-764506        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:21:00
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:21:00.923659   10573 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:21:00.923796   10573 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:21:00.923806   10573 out.go:358] Setting ErrFile to fd 2...
	I0923 10:21:00.923812   10573 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:21:00.924000   10573 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3772/.minikube/bin
	W0923 10:21:00.924184   10573 root.go:314] Error reading config file at /home/jenkins/minikube-integration/19689-3772/.minikube/config/config.json: open /home/jenkins/minikube-integration/19689-3772/.minikube/config/config.json: no such file or directory
	I0923 10:21:00.924804   10573 out.go:352] Setting JSON to true
	I0923 10:21:00.925805   10573 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":205,"bootTime":1727086656,"procs":177,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:21:00.925903   10573 start.go:139] virtualization: kvm guest
	I0923 10:21:00.928596   10573 out.go:97] [download-only-764506] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	W0923 10:21:00.928730   10573 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/19689-3772/.minikube/cache/preloaded-tarball: no such file or directory
	I0923 10:21:00.928797   10573 notify.go:220] Checking for updates...
	I0923 10:21:00.930472   10573 out.go:169] MINIKUBE_LOCATION=19689
	I0923 10:21:00.932209   10573 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:21:00.933812   10573 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19689-3772/kubeconfig
	I0923 10:21:00.935215   10573 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3772/.minikube
	I0923 10:21:00.936702   10573 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0923 10:21:00.939505   10573 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 10:21:00.939733   10573 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:21:00.961364   10573 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 10:21:00.961437   10573 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:21:01.344011   10573 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-23 10:21:01.334354243 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 10:21:01.344104   10573 docker.go:318] overlay module found
	I0923 10:21:01.346208   10573 out.go:97] Using the docker driver based on user configuration
	I0923 10:21:01.346231   10573 start.go:297] selected driver: docker
	I0923 10:21:01.346239   10573 start.go:901] validating driver "docker" against <nil>
	I0923 10:21:01.346332   10573 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:21:01.395299   10573 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-23 10:21:01.386265192 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 10:21:01.395501   10573 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:21:01.396320   10573 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0923 10:21:01.396546   10573 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 10:21:01.398785   10573 out.go:169] Using Docker driver with root privileges
	I0923 10:21:01.400259   10573 cni.go:84] Creating CNI manager for ""
	I0923 10:21:01.400344   10573 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0923 10:21:01.400357   10573 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 10:21:01.400443   10573 start.go:340] cluster config:
	{Name:download-only-764506 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-764506 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:21:01.402103   10573 out.go:97] Starting "download-only-764506" primary control-plane node in "download-only-764506" cluster
	I0923 10:21:01.402152   10573 cache.go:121] Beginning downloading kic base image for docker with crio
	I0923 10:21:01.403493   10573 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0923 10:21:01.403520   10573 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0923 10:21:01.403667   10573 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 10:21:01.419600   10573 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 10:21:01.419769   10573 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 10:21:01.419853   10573 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 10:21:01.540166   10573 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0923 10:21:01.540210   10573 cache.go:56] Caching tarball of preloaded images
	I0923 10:21:01.540391   10573 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0923 10:21:01.542471   10573 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0923 10:21:01.542493   10573 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0923 10:21:01.650248   10573 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/19689-3772/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0923 10:21:15.042459   10573 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0923 10:21:15.042567   10573 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/19689-3772/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-764506 host does not exist
	  To start a cluster, run: "minikube start -p download-only-764506"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.06s)

TestDownloadOnly/v1.20.0/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.20s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-764506
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.13s)

TestDownloadOnly/v1.31.1/json-events (24.08s)

=== RUN   TestDownloadOnly/v1.31.1/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-662224 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-662224 --force --alsologtostderr --kubernetes-version=v1.31.1 --container-runtime=crio --driver=docker  --container-runtime=crio: (24.075143257s)
--- PASS: TestDownloadOnly/v1.31.1/json-events (24.08s)

TestDownloadOnly/v1.31.1/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.1/preload-exists
I0923 10:21:44.210182   10562 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
I0923 10:21:44.210220   10562 preload.go:146] Found local preload: /home/jenkins/minikube-integration/19689-3772/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.1/preload-exists (0.00s)

TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

=== RUN   TestDownloadOnly/v1.31.1/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-662224
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-662224: exit status 85 (59.209093ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-764506 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | -p download-only-764506        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| delete  | -p download-only-764506        | download-only-764506 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC | 23 Sep 24 10:21 UTC |
	| start   | -o=json --download-only        | download-only-662224 | jenkins | v1.34.0 | 23 Sep 24 10:21 UTC |                     |
	|         | -p download-only-662224        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.1   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/09/23 10:21:20
	Running on machine: ubuntu-20-agent
	Binary: Built with gc go1.23.0 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0923 10:21:20.170892   10967 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:21:20.171163   10967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:21:20.171173   10967 out.go:358] Setting ErrFile to fd 2...
	I0923 10:21:20.171179   10967 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:21:20.171346   10967 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3772/.minikube/bin
	I0923 10:21:20.171895   10967 out.go:352] Setting JSON to true
	I0923 10:21:20.172704   10967 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":224,"bootTime":1727086656,"procs":175,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:21:20.172765   10967 start.go:139] virtualization: kvm guest
	I0923 10:21:20.175070   10967 out.go:97] [download-only-662224] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 10:21:20.175217   10967 notify.go:220] Checking for updates...
	I0923 10:21:20.176611   10967 out.go:169] MINIKUBE_LOCATION=19689
	I0923 10:21:20.178253   10967 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:21:20.179686   10967 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/19689-3772/kubeconfig
	I0923 10:21:20.181098   10967 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3772/.minikube
	I0923 10:21:20.182735   10967 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0923 10:21:20.185346   10967 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0923 10:21:20.185587   10967 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:21:20.206618   10967 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 10:21:20.206684   10967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:21:20.252855   10967 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-23 10:21:20.243418282 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 10:21:20.252963   10967 docker.go:318] overlay module found
	I0923 10:21:20.254802   10967 out.go:97] Using the docker driver based on user configuration
	I0923 10:21:20.254834   10967 start.go:297] selected driver: docker
	I0923 10:21:20.254840   10967 start.go:901] validating driver "docker" against <nil>
	I0923 10:21:20.254921   10967 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:21:20.300262   10967 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:30 OomKillDisable:true NGoroutines:45 SystemTime:2024-09-23 10:21:20.29152359 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 10:21:20.300411   10967 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0923 10:21:20.300910   10967 start_flags.go:393] Using suggested 8000MB memory alloc based on sys=32089MB, container=32089MB
	I0923 10:21:20.301055   10967 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0923 10:21:20.302989   10967 out.go:169] Using Docker driver with root privileges
	I0923 10:21:20.304549   10967 cni.go:84] Creating CNI manager for ""
	I0923 10:21:20.304611   10967 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I0923 10:21:20.304621   10967 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0923 10:21:20.304684   10967 start.go:340] cluster config:
	{Name:download-only-662224 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:8000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:download-only-662224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:21:20.306164   10967 out.go:97] Starting "download-only-662224" primary control-plane node in "download-only-662224" cluster
	I0923 10:21:20.306181   10967 cache.go:121] Beginning downloading kic base image for docker with crio
	I0923 10:21:20.307554   10967 out.go:97] Pulling base image v0.0.45-1726784731-19672 ...
	I0923 10:21:20.307575   10967 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:21:20.307617   10967 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local docker daemon
	I0923 10:21:20.323466   10967 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed to local cache
	I0923 10:21:20.323612   10967 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory
	I0923 10:21:20.323634   10967 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed in local cache directory, skipping pull
	I0923 10:21:20.323642   10967 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed exists in cache, skipping pull
	I0923 10:21:20.323651   10967 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed as a tarball
	I0923 10:21:20.790397   10967 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	I0923 10:21:20.790445   10967 cache.go:56] Caching tarball of preloaded images
	I0923 10:21:20.790604   10967 preload.go:131] Checking if preload exists for k8s version v1.31.1 and runtime crio
	I0923 10:21:20.792499   10967 out.go:97] Downloading Kubernetes v1.31.1 preload ...
	I0923 10:21:20.792558   10967 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4 ...
	I0923 10:21:20.910475   10967 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.1/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4?checksum=md5:aa79045e4550b9510ee496fee0d50abb -> /home/jenkins/minikube-integration/19689-3772/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.1-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-662224 host does not exist
	  To start a cluster, run: "minikube start -p download-only-662224"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.1/LogsDuration (0.06s)

TestDownloadOnly/v1.31.1/DeleteAll (0.2s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.1/DeleteAll (0.20s)

TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

=== RUN   TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-662224
--- PASS: TestDownloadOnly/v1.31.1/DeleteAlwaysSucceeds (0.12s)

TestDownloadOnlyKic (1.05s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p download-docker-581243 --alsologtostderr --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "download-docker-581243" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p download-docker-581243
--- PASS: TestDownloadOnlyKic (1.05s)

TestBinaryMirror (0.74s)

=== RUN   TestBinaryMirror
I0923 10:21:45.886499   10562 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.1/bin/linux/amd64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-083835 --alsologtostderr --binary-mirror http://127.0.0.1:40991 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-083835" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-083835
--- PASS: TestBinaryMirror (0.74s)

TestOffline (54.71s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-391658 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-391658 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=docker  --container-runtime=crio: (52.35050316s)
helpers_test.go:175: Cleaning up "offline-crio-391658" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-391658
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-391658: (2.357235673s)
--- PASS: TestOffline (54.71s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:975: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-445250
addons_test.go:975: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-445250: exit status 85 (53.09472ms)

-- stdout --
	* Profile "addons-445250" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-445250"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.05s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:986: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-445250
addons_test.go:986: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-445250: exit status 85 (52.682098ms)

-- stdout --
	* Profile "addons-445250" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-445250"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.05s)

TestAddons/Setup (207.69s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-amd64 start -p addons-445250 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns
addons_test.go:107: (dbg) Done: out/minikube-linux-amd64 start -p addons-445250 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns: (3m27.689532185s)
--- PASS: TestAddons/Setup (207.69s)

TestAddons/serial/GCPAuth/Namespaces (0.14s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:594: (dbg) Run:  kubectl --context addons-445250 create ns new-namespace
addons_test.go:608: (dbg) Run:  kubectl --context addons-445250 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.14s)

TestAddons/parallel/InspektorGadget (10.63s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-lpfxz" [25bd34ff-7c54-4525-87e2-121cbc2d4507] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:786: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00385924s
addons_test.go:789: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-445250
addons_test.go:789: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-445250: (5.629395381s)
--- PASS: TestAddons/parallel/InspektorGadget (10.63s)

TestAddons/parallel/CSI (51.8s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI
=== CONT  TestAddons/parallel/CSI
addons_test.go:505: csi-hostpath-driver pods stabilized in 3.847031ms
addons_test.go:508: (dbg) Run:  kubectl --context addons-445250 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:513: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:518: (dbg) Run:  kubectl --context addons-445250 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:523: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [d50050a6-5c3e-4661-b149-600447bb19d5] Pending
helpers_test.go:344: "task-pv-pod" [d50050a6-5c3e-4661-b149-600447bb19d5] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [d50050a6-5c3e-4661-b149-600447bb19d5] Running
addons_test.go:523: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.00317201s
addons_test.go:528: (dbg) Run:  kubectl --context addons-445250 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:533: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-445250 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-445250 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:538: (dbg) Run:  kubectl --context addons-445250 delete pod task-pv-pod
addons_test.go:544: (dbg) Run:  kubectl --context addons-445250 delete pvc hpvc
addons_test.go:550: (dbg) Run:  kubectl --context addons-445250 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:555: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:560: (dbg) Run:  kubectl --context addons-445250 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:565: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [6867e005-fef4-47d2-8589-d98d01134d48] Pending
helpers_test.go:344: "task-pv-pod-restore" [6867e005-fef4-47d2-8589-d98d01134d48] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [6867e005-fef4-47d2-8589-d98d01134d48] Running
addons_test.go:565: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.003035006s
addons_test.go:570: (dbg) Run:  kubectl --context addons-445250 delete pod task-pv-pod-restore
addons_test.go:574: (dbg) Run:  kubectl --context addons-445250 delete pvc hpvc-restore
addons_test.go:578: (dbg) Run:  kubectl --context addons-445250 delete volumesnapshot new-snapshot-demo
addons_test.go:582: (dbg) Run:  out/minikube-linux-amd64 -p addons-445250 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:582: (dbg) Done: out/minikube-linux-amd64 -p addons-445250 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.523238503s)
addons_test.go:586: (dbg) Run:  out/minikube-linux-amd64 -p addons-445250 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (51.80s)

TestAddons/parallel/Headlamp (17.32s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:768: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-445250 --alsologtostderr -v=1
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7b5c95b59d-x6lkf" [9252897c-b313-4503-81a2-768efef61a66] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7b5c95b59d-x6lkf" [9252897c-b313-4503-81a2-768efef61a66] Running
addons_test.go:773: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.003814486s
addons_test.go:777: (dbg) Run:  out/minikube-linux-amd64 -p addons-445250 addons disable headlamp --alsologtostderr -v=1
addons_test.go:777: (dbg) Done: out/minikube-linux-amd64 -p addons-445250 addons disable headlamp --alsologtostderr -v=1: (5.594768067s)
--- PASS: TestAddons/parallel/Headlamp (17.32s)

TestAddons/parallel/CloudSpanner (5.46s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-5b584cc74-rztwp" [c3f9826a-1438-4c94-b226-2dd68065ec79] Running
addons_test.go:805: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.004132649s
addons_test.go:808: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-445250
--- PASS: TestAddons/parallel/CloudSpanner (5.46s)

TestAddons/parallel/LocalPath (56.77s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:920: (dbg) Run:  kubectl --context addons-445250 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:926: (dbg) Run:  kubectl --context addons-445250 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:930: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-445250 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [f5d29ae9-86e7-4900-ae5c-d94f276baf8b] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [f5d29ae9-86e7-4900-ae5c-d94f276baf8b] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [f5d29ae9-86e7-4900-ae5c-d94f276baf8b] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:933: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 6.003201793s
addons_test.go:938: (dbg) Run:  kubectl --context addons-445250 get pvc test-pvc -o=json
addons_test.go:947: (dbg) Run:  out/minikube-linux-amd64 -p addons-445250 ssh "cat /opt/local-path-provisioner/pvc-f2f3f271-6db1-4176-931b-e93dd714c1c9_default_test-pvc/file1"
addons_test.go:959: (dbg) Run:  kubectl --context addons-445250 delete pod test-local-path
addons_test.go:963: (dbg) Run:  kubectl --context addons-445250 delete pvc test-pvc
addons_test.go:967: (dbg) Run:  out/minikube-linux-amd64 -p addons-445250 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:967: (dbg) Done: out/minikube-linux-amd64 -p addons-445250 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.967250873s)
--- PASS: TestAddons/parallel/LocalPath (56.77s)

TestAddons/parallel/NvidiaDevicePlugin (6.44s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-649c2" [ad56c28d-1cef-404e-a46b-44ed08feea84] Running
addons_test.go:999: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003667436s
addons_test.go:1002: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-445250
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.44s)

TestAddons/parallel/Yakd (10.65s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-k7g86" [7826be82-3965-41c1-ab64-87f8ed78b529] Running
addons_test.go:1010: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.003756715s
addons_test.go:1014: (dbg) Run:  out/minikube-linux-amd64 -p addons-445250 addons disable yakd --alsologtostderr -v=1
addons_test.go:1014: (dbg) Done: out/minikube-linux-amd64 -p addons-445250 addons disable yakd --alsologtostderr -v=1: (5.642155465s)
--- PASS: TestAddons/parallel/Yakd (10.65s)

TestAddons/StoppedEnableDisable (6s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-445250
addons_test.go:170: (dbg) Done: out/minikube-linux-amd64 stop -p addons-445250: (5.766333129s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-445250
addons_test.go:178: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-445250
addons_test.go:183: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-445250
--- PASS: TestAddons/StoppedEnableDisable (6.00s)

TestCertOptions (25.59s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-577918 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-577918 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (23.15678152s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-577918 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-577918 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-577918 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-577918" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-577918
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-options-577918: (1.86384944s)
--- PASS: TestCertOptions (25.59s)

TestCertExpiration (225.32s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-290782 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-290782 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (25.694968778s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-290782 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-290782 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (15.083187795s)
helpers_test.go:175: Cleaning up "cert-expiration-290782" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-290782
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p cert-expiration-290782: (4.541859126s)
--- PASS: TestCertExpiration (225.32s)

TestForceSystemdFlag (25s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-225895 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-225895 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (21.987979193s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-225895 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-225895" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-225895
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-225895: (2.758646037s)
--- PASS: TestForceSystemdFlag (25.00s)

TestForceSystemdEnv (36.46s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-433419 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-433419 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (33.928592235s)
helpers_test.go:175: Cleaning up "force-systemd-env-433419" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-433419
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-433419: (2.528920376s)
--- PASS: TestForceSystemdEnv (36.46s)

TestKVMDriverInstallOrUpdate (5.45s)

=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate
=== CONT  TestKVMDriverInstallOrUpdate
I0923 11:12:48.660821   10562 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0923 11:12:48.660963   10562 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-without-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
W0923 11:12:48.689109   10562 install.go:62] docker-machine-driver-kvm2: exit status 1
W0923 11:12:48.689432   10562 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0923 11:12:48.689487   10562 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3172028562/001/docker-machine-driver-kvm2
I0923 11:12:48.949318   10562 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3172028562/001/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc0000e9d10 gz:0xc0000e9d18 tar:0xc0000e9c70 tar.bz2:0xc0000e9c80 tar.gz:0xc0000e9cd0 tar.xz:0xc0000e9ce0 tar.zst:0xc0000e9cf0 tbz2:0xc0000e9c80 tgz:0xc0000e9cd0 txz:0xc0000e9ce0 tzst:0xc0000e9cf0 xz:0xc0000e9d20 zip:0xc0000e9d30 zst:0xc0000e9d28] Getters:map[file:0xc0014c9b90 http:0xc001c96460 https:0xc001c964b0] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0923 11:12:48.949358   10562 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3172028562/001/docker-machine-driver-kvm2
I0923 11:12:52.229468   10562 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0923 11:12:52.229597   10562 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version:/home/jenkins/workspace/Docker_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
I0923 11:12:52.261920   10562 install.go:137] /home/jenkins/workspace/Docker_Linux_crio_integration/testdata/kvm2-driver-older-version/docker-machine-driver-kvm2 version is 1.1.1
W0923 11:12:52.261959   10562 install.go:62] docker-machine-driver-kvm2: docker-machine-driver-kvm2 is version 1.1.1, want 1.3.0
W0923 11:12:52.262021   10562 out.go:174] [unset outFile]: * Downloading driver docker-machine-driver-kvm2:
I0923 11:12:52.262055   10562 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3172028562/002/docker-machine-driver-kvm2
I0923 11:12:52.320459   10562 driver.go:46] failed to download arch specific driver: getter: &{Ctx:context.Background Src:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2-amd64.sha256 Dst:/tmp/TestKVMDriverInstallOrUpdate3172028562/002/docker-machine-driver-kvm2.download Pwd: Mode:2 Umask:---------- Detectors:[0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640 0x4665640] Decompressors:map[bz2:0xc0000e9d10 gz:0xc0000e9d18 tar:0xc0000e9c70 tar.bz2:0xc0000e9c80 tar.gz:0xc0000e9cd0 tar.xz:0xc0000e9ce0 tar.zst:0xc0000e9cf0 tbz2:0xc0000e9c80 tgz:0xc0000e9cd0 txz:0xc0000e9ce0 tzst:0xc0000e9cf0 xz:0xc0000e9d20 zip:0xc0000e9d30 zst:0xc0000e9d28] Getters:map[file:0xc0009659c0 http:0xc001c97400 https:0xc001c97450] Dir:false ProgressListener:<nil> Insecure:false DisableSymlinks:false Options:[]}: invalid checksum: Error downloading checksum file: bad response code: 404. trying to get the common version
I0923 11:12:52.320510   10562 download.go:107] Downloading: https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2?checksum=file:https://github.com/kubernetes/minikube/releases/download/v1.3.0/docker-machine-driver-kvm2.sha256 -> /tmp/TestKVMDriverInstallOrUpdate3172028562/002/docker-machine-driver-kvm2
--- PASS: TestKVMDriverInstallOrUpdate (5.45s)

TestErrorSpam/setup (19.87s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-833264 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-833264 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-833264 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-833264 --driver=docker  --container-runtime=crio: (19.870976525s)
--- PASS: TestErrorSpam/setup (19.87s)

TestErrorSpam/start (0.55s)
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-833264 --log_dir /tmp/nospam-833264 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-833264 --log_dir /tmp/nospam-833264 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-833264 --log_dir /tmp/nospam-833264 start --dry-run
--- PASS: TestErrorSpam/start (0.55s)

TestErrorSpam/status (0.83s)
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-833264 --log_dir /tmp/nospam-833264 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-833264 --log_dir /tmp/nospam-833264 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-833264 --log_dir /tmp/nospam-833264 status
--- PASS: TestErrorSpam/status (0.83s)

TestErrorSpam/pause (1.46s)
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-833264 --log_dir /tmp/nospam-833264 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-833264 --log_dir /tmp/nospam-833264 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-833264 --log_dir /tmp/nospam-833264 pause
--- PASS: TestErrorSpam/pause (1.46s)

TestErrorSpam/unpause (1.64s)
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-833264 --log_dir /tmp/nospam-833264 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-833264 --log_dir /tmp/nospam-833264 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-833264 --log_dir /tmp/nospam-833264 unpause
--- PASS: TestErrorSpam/unpause (1.64s)

TestErrorSpam/stop (1.35s)
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-833264 --log_dir /tmp/nospam-833264 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-833264 --log_dir /tmp/nospam-833264 stop: (1.179425908s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-833264 --log_dir /tmp/nospam-833264 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-833264 --log_dir /tmp/nospam-833264 stop
--- PASS: TestErrorSpam/stop (1.35s)

TestFunctional/serial/CopySyncFile (0s)
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/19689-3772/.minikube/files/etc/test/nested/copy/10562/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (37.36s)
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-amd64 start -p functional-676470 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
E0923 10:40:14.701850   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:40:14.708308   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:40:14.719723   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:40:14.741166   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:40:14.782623   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:40:14.864027   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:40:15.025685   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:40:15.347361   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:40:15.989397   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:2234: (dbg) Done: out/minikube-linux-amd64 start -p functional-676470 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (37.360180525s)
--- PASS: TestFunctional/serial/StartWithProxy (37.36s)

TestFunctional/serial/AuditLog (0s)
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (25.66s)
=== RUN   TestFunctional/serial/SoftStart
I0923 10:40:17.192982   10562 config.go:182] Loaded profile config "functional-676470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
functional_test.go:659: (dbg) Run:  out/minikube-linux-amd64 start -p functional-676470 --alsologtostderr -v=8
E0923 10:40:17.271024   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:40:19.833091   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:40:24.954941   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:40:35.197225   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:659: (dbg) Done: out/minikube-linux-amd64 start -p functional-676470 --alsologtostderr -v=8: (25.663500806s)
functional_test.go:663: soft start took 25.664598666s for "functional-676470" cluster.
I0923 10:40:42.856835   10562 config.go:182] Loaded profile config "functional-676470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/SoftStart (25.66s)

TestFunctional/serial/KubeContext (0.04s)
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

TestFunctional/serial/KubectlGetPods (0.07s)
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-676470 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

TestFunctional/serial/CacheCmd/cache/add_remote (3.35s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-676470 cache add registry.k8s.io/pause:3.1: (1.072792022s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-676470 cache add registry.k8s.io/pause:3.3: (1.220877836s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-amd64 -p functional-676470 cache add registry.k8s.io/pause:latest: (1.054917439s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.35s)

TestFunctional/serial/CacheCmd/cache/add_local (2.08s)
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-676470 /tmp/TestFunctionalserialCacheCmdcacheadd_local1825905082/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 cache add minikube-local-cache-test:functional-676470
functional_test.go:1089: (dbg) Done: out/minikube-linux-amd64 -p functional-676470 cache add minikube-local-cache-test:functional-676470: (1.753776306s)
functional_test.go:1094: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 cache delete minikube-local-cache-test:functional-676470
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-676470
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (2.08s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.05s)

TestFunctional/serial/CacheCmd/cache/list (0.05s)
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.05s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.27s)

TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-676470 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (259.82947ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 cache reload
functional_test.go:1163: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.64s)

TestFunctional/serial/CacheCmd/cache/delete (0.09s)
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.09s)

TestFunctional/serial/MinikubeKubectlCmd (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 kubectl -- --context functional-676470 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.10s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.1s)
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-676470 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.10s)

TestFunctional/serial/ExtraConfig (31.32s)
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-amd64 start -p functional-676470 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0923 10:40:55.678922   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:757: (dbg) Done: out/minikube-linux-amd64 start -p functional-676470 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (31.315440951s)
functional_test.go:761: restart took 31.315575983s for "functional-676470" cluster.
I0923 10:41:22.022379   10562 config.go:182] Loaded profile config "functional-676470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestFunctional/serial/ExtraConfig (31.32s)

TestFunctional/serial/ComponentHealth (0.07s)
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-676470 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

TestFunctional/serial/LogsCmd (1.35s)
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-amd64 -p functional-676470 logs: (1.348934705s)
--- PASS: TestFunctional/serial/LogsCmd (1.35s)

TestFunctional/serial/LogsFileCmd (1.35s)
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 logs --file /tmp/TestFunctionalserialLogsFileCmd2807842003/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-amd64 -p functional-676470 logs --file /tmp/TestFunctionalserialLogsFileCmd2807842003/001/logs.txt: (1.351293798s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.35s)

TestFunctional/serial/InvalidService (4.45s)
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-676470 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-676470
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-676470: exit status 115 (313.659478ms)

-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31184 |
	|-----------|-------------|-------------|---------------------------|
	
	

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-676470 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.45s)

TestFunctional/parallel/ConfigCmd (0.32s)
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-676470 config get cpus: exit status 14 (54.512248ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-676470 config get cpus: exit status 14 (45.089858ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.32s)
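The ConfigCmd round-trip above (unset → get → set → get → unset → get) relies on `config get` exiting non-zero, status 14, when the key is absent. A minimal offline imitation of that contract, with a flat key=value file as the store; the `cfg_*` helper names and the reuse of exit code 14 are assumptions for illustration, not minikube's implementation.

```shell
# Flat-file config store; one KEY=VALUE pair per line.
cfg=$(mktemp)

# cfg_get KEY: print the value, or exit 14 when the key is absent
# (mirroring the "specified key could not be found" failures above).
cfg_get()   { grep -s "^$1=" "$cfg" | cut -d= -f2- | grep . || return 14; }
# cfg_unset KEY: rewrite the store without KEY's line.
cfg_unset() { grep -v "^$1=" "$cfg" > "$cfg.tmp" || true; mv "$cfg.tmp" "$cfg"; }
# cfg_set KEY VALUE: replace any existing entry, then append.
cfg_set()   { cfg_unset "$1"; echo "$1=$2" >> "$cfg"; }

cfg_get cpus || echo "get cpus -> exit $?"   # key absent
cfg_set cpus 2
cfg_get cpus                                 # prints the stored value
cfg_unset cpus
cfg_get cpus || echo "get cpus -> exit $?"   # absent again
```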

TestFunctional/parallel/DashboardCmd (22.24s)
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-676470 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-676470 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 56070: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (22.24s)

TestFunctional/parallel/DryRun (0.32s)
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-amd64 start -p functional-676470 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-676470 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (140.896379ms)

-- stdout --
	* [functional-676470] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-3772/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3772/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

-- /stdout --
** stderr ** 
	I0923 10:41:49.340274   53612 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:41:49.340370   53612 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:41:49.340378   53612 out.go:358] Setting ErrFile to fd 2...
	I0923 10:41:49.340382   53612 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:41:49.340571   53612 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3772/.minikube/bin
	I0923 10:41:49.341081   53612 out.go:352] Setting JSON to false
	I0923 10:41:49.342145   53612 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1453,"bootTime":1727086656,"procs":234,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:41:49.342248   53612 start.go:139] virtualization: kvm guest
	I0923 10:41:49.344331   53612 out.go:177] * [functional-676470] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 10:41:49.345780   53612 notify.go:220] Checking for updates...
	I0923 10:41:49.345798   53612 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:41:49.347354   53612 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:41:49.348836   53612 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3772/kubeconfig
	I0923 10:41:49.350011   53612 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3772/.minikube
	I0923 10:41:49.351397   53612 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 10:41:49.352811   53612 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:41:49.354724   53612 config.go:182] Loaded profile config "functional-676470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:41:49.355368   53612 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:41:49.380666   53612 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 10:41:49.380793   53612 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:41:49.426722   53612 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-23 10:41:49.417669688 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 10:41:49.426830   53612 docker.go:318] overlay module found
	I0923 10:41:49.428726   53612 out.go:177] * Using the docker driver based on existing profile
	I0923 10:41:49.429960   53612 start.go:297] selected driver: docker
	I0923 10:41:49.429981   53612 start.go:901] validating driver "docker" against &{Name:functional-676470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-676470 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:41:49.430059   53612 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:41:49.432408   53612 out.go:201] 
	W0923 10:41:49.433790   53612 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0923 10:41:49.435173   53612 out.go:201] 

** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-amd64 start -p functional-676470 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.32s)

TestFunctional/parallel/InternationalLanguage (0.16s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-amd64 start -p functional-676470 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-676470 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (161.033684ms)

-- stdout --
	* [functional-676470] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-3772/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3772/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

-- /stdout --
** stderr ** 
	I0923 10:41:41.094522   51234 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:41:41.094634   51234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:41:41.094641   51234 out.go:358] Setting ErrFile to fd 2...
	I0923 10:41:41.094650   51234 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:41:41.094936   51234 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3772/.minikube/bin
	I0923 10:41:41.095531   51234 out.go:352] Setting JSON to false
	I0923 10:41:41.096572   51234 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":1445,"bootTime":1727086656,"procs":223,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 10:41:41.096646   51234 start.go:139] virtualization: kvm guest
	I0923 10:41:41.099167   51234 out.go:177] * [functional-676470] minikube v1.34.0 sur Ubuntu 20.04 (kvm/amd64)
	I0923 10:41:41.100809   51234 notify.go:220] Checking for updates...
	I0923 10:41:41.100815   51234 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 10:41:41.102787   51234 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 10:41:41.104965   51234 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3772/kubeconfig
	I0923 10:41:41.106565   51234 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3772/.minikube
	I0923 10:41:41.108121   51234 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 10:41:41.109659   51234 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 10:41:41.111395   51234 config.go:182] Loaded profile config "functional-676470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:41:41.111874   51234 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 10:41:41.138046   51234 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 10:41:41.138152   51234 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:41:41.194772   51234 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:53 SystemTime:2024-09-23 10:41:41.186018278 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x
86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridg
e-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 10:41:41.194919   51234 docker.go:318] overlay module found
	I0923 10:41:41.197043   51234 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I0923 10:41:41.198455   51234 start.go:297] selected driver: docker
	I0923 10:41:41.198475   51234 start.go:901] validating driver "docker" against &{Name:functional-676470 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1726784731-19672@sha256:7f8c62ddb0100a5b958dd19c5b5478b8c7ef13da9a0a4d6c7d18f43544e0dbed Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.1 ClusterName:functional-676470 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.1 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0923 10:41:41.198606   51234 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 10:41:41.200858   51234 out.go:201] 
	W0923 10:41:41.202216   51234 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0923 10:41:41.203656   51234 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.16s)
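The localized failure above comes from the same memory check that the DryRun test exercises: minikube is asked for 250MB, which is below its usable minimum of 1800MB, so it exits with status 23 and the RSRC_INSUFFICIENT_REQ_MEMORY message (here in French). A minimal Python sketch of that validation; the helper name and return shape are illustrative, not minikube's actual Go code:

```python
MIN_USABLE_MB = 1800  # "usable minimum of 1800MB", per the error message above

def validate_requested_memory(requested_mb):
    """Return (ok, reason), mirroring the RSRC_INSUFFICIENT_REQ_MEMORY check."""
    if requested_mb < MIN_USABLE_MB:
        return False, (
            "RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation "
            f"{requested_mb}MiB is less than the usable minimum of {MIN_USABLE_MB}MB"
        )
    return True, ""

print(validate_requested_memory(250))
```

With `--dry-run` the check still runs, which is why both DryRun and InternationalLanguage hit it without ever starting a cluster.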

TestFunctional/parallel/StatusCmd (0.83s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.83s)
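The second `status` invocation above passes a Go text/template format string (`host:{{.Host}},kublet:{{.Kubelet}},...`; the `kublet` spelling is in the test's own format string). As a rough stand-in for how those `{{.Field}}` placeholders expand against a status object, assuming a flat field-to-value mapping:

```python
import re

def render_status(template, status):
    """Expand Go-template-style {{.Field}} placeholders from a dict.
    (Illustrative stand-in; minikube itself uses Go's text/template.)"""
    return re.sub(r"\{\{\.(\w+)\}\}", lambda m: str(status[m.group(1)]), template)

status = {"Host": "Running", "Kubelet": "Running",
          "APIServer": "Running", "Kubeconfig": "Configured"}
fmt = "host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}"
print(render_status(fmt, status))
# host:Running,kublet:Running,apiserver:Running,kubeconfig:Configured
```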

TestFunctional/parallel/ServiceCmdConnect (10.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1629: (dbg) Run:  kubectl --context functional-676470 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-676470 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-6l8z5" [dc4ba0d0-5b14-487f-af75-6eafe796fda3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-67bdd5bbb4-6l8z5" [dc4ba0d0-5b14-487f-af75-6eafe796fda3] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.003849715s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30279
functional_test.go:1675: http://192.168.49.2:30279: success! body:

Hostname: hello-node-connect-67bdd5bbb4-6l8z5

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30279
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.60s)
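The echoserver body above is organized into labelled sections of key=value pairs. A small parsing helper (my own sketch for illustration, not part of the test suite) that pulls one section out of such a response:

```python
def parse_echoserver_section(body, section):
    """Collect key=value pairs from one labelled section of an echoserver reply."""
    out, in_section = {}, False
    for line in body.splitlines():
        stripped = line.strip()
        if stripped.endswith(":"):          # a section header like "Request Headers:"
            in_section = stripped == section + ":"
        elif in_section and "=" in stripped:
            key, _, value = stripped.partition("=")
            out[key] = value
    return out

body = """Request Information:
\tclient_address=10.244.0.1
\tmethod=GET
\tquery=

Request Headers:
\taccept-encoding=gzip
\thost=192.168.49.2:30279
"""
print(parse_echoserver_section(body, "Request Headers"))
# {'accept-encoding': 'gzip', 'host': '192.168.49.2:30279'}
```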

TestFunctional/parallel/AddonsCmd (0.14s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.14s)

TestFunctional/parallel/PersistentVolumeClaim (38.49s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [f43970e9-a3f6-4a15-85f3-e6bb7a792706] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003890019s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-676470 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-676470 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-676470 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-676470 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [300ffe5f-ed52-4f1d-a333-bd3e98a27e74] Pending
helpers_test.go:344: "sp-pod" [300ffe5f-ed52-4f1d-a333-bd3e98a27e74] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [300ffe5f-ed52-4f1d-a333-bd3e98a27e74] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.003457951s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-676470 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-676470 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-676470 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d183b40f-0733-4c58-a2b5-f9be22bc4508] Pending
helpers_test.go:344: "sp-pod" [d183b40f-0733-4c58-a2b5-f9be22bc4508] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d183b40f-0733-4c58-a2b5-f9be22bc4508] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 17.003575095s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-676470 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (38.49s)

TestFunctional/parallel/SSHCmd (0.55s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.55s)

TestFunctional/parallel/CpCmd (1.73s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh -n functional-676470 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 cp functional-676470:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd529151015/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh -n functional-676470 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh -n functional-676470 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.73s)

TestFunctional/parallel/MySQL (22.19s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1793: (dbg) Run:  kubectl --context functional-676470 replace --force -f testdata/mysql.yaml
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-6cdb49bbb-r2j2d" [17417732-c673-4aea-aeff-71328570b227] Pending
helpers_test.go:344: "mysql-6cdb49bbb-r2j2d" [17417732-c673-4aea-aeff-71328570b227] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-6cdb49bbb-r2j2d" [17417732-c673-4aea-aeff-71328570b227] Running
functional_test.go:1799: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 20.003488655s
functional_test.go:1807: (dbg) Run:  kubectl --context functional-676470 exec mysql-6cdb49bbb-r2j2d -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-676470 exec mysql-6cdb49bbb-r2j2d -- mysql -ppassword -e "show databases;": exit status 1 (98.259073ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0923 10:42:07.712319   10562 retry.go:31] will retry after 587.974661ms: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-676470 exec mysql-6cdb49bbb-r2j2d -- mysql -ppassword -e "show databases;"
functional_test.go:1807: (dbg) Non-zero exit: kubectl --context functional-676470 exec mysql-6cdb49bbb-r2j2d -- mysql -ppassword -e "show databases;": exit status 1 (111.095934ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
I0923 10:42:08.412359   10562 retry.go:31] will retry after 1.098756935s: exit status 1
functional_test.go:1807: (dbg) Run:  kubectl --context functional-676470 exec mysql-6cdb49bbb-r2j2d -- mysql -ppassword -e "show databases;"
2024/09/23 10:42:13 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/MySQL (22.19s)
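The two `will retry after ...` lines show minikube's retry.go waiting with growing, jittered delays while mysqld finishes starting inside the pod. A sketch of that jittered exponential backoff schedule; the base delay, growth factor, and jitter range here are assumptions for illustration, not retry.go's actual parameters:

```python
import random

def backoff_schedule(base_ms=500.0, factor=2.0, attempts=5, jitter=0.25, seed=0):
    """Jittered exponential backoff delays, in milliseconds."""
    rng = random.Random(seed)  # seeded so the sketch is reproducible
    delays, delay = [], base_ms
    for _ in range(attempts):
        delays.append(delay * (1 + rng.uniform(-jitter, jitter)))
        delay *= factor  # double the base delay each attempt
    return delays

print([round(d) for d in backoff_schedule()])
```

The jitter keeps parallel retries from synchronizing; the exponential growth matches the pattern in the log (roughly 0.6s, then 1.1s).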

TestFunctional/parallel/FileSync (0.24s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/10562/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh "sudo cat /etc/test/nested/copy/10562/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.24s)

TestFunctional/parallel/CertSync (1.47s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/10562.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh "sudo cat /etc/ssl/certs/10562.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/10562.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh "sudo cat /usr/share/ca-certificates/10562.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/105622.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh "sudo cat /etc/ssl/certs/105622.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/105622.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh "sudo cat /usr/share/ca-certificates/105622.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.47s)

TestFunctional/parallel/NodeLabels (0.07s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-676470 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-676470 ssh "sudo systemctl is-active docker": exit status 1 (280.98296ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-676470 ssh "sudo systemctl is-active containerd": exit status 1 (267.838587ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.55s)
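The non-zero exits above are the expected outcome: `systemctl is-active` prints the unit state on stdout and encodes it in its exit code (0 for active, commonly 3 for inactive), and ssh relays that code back to the harness as "Process exited with status 3". A minimal sketch of the condition the test effectively asserts on a crio cluster; the helper name `runtime_disabled` is hypothetical, not from the minikube source:

```python
# Sketch of the pass condition for NonActiveRuntimeDisabled: an inactive
# runtime must make `systemctl is-active <unit>` exit non-zero (typically 3)
# while printing "inactive" on stdout. Function name is illustrative only.
def runtime_disabled(stdout: str, exit_code: int) -> bool:
    # systemctl convention: exit 0 = active, non-zero = not active.
    return exit_code != 0 and stdout.strip() == "inactive"

print(runtime_disabled("inactive\n", 3))  # docker/containerd on a crio node
print(runtime_disabled("active\n", 0))    # would mean the runtime is enabled
```

This is why the harness logs the runs as "Non-zero exit" and still marks the test PASS.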
TestFunctional/parallel/License (0.71s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.71s)
TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.45s)
TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)
TestFunctional/parallel/Version/components (0.82s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.82s)
TestFunctional/parallel/ProfileCmd/profile_list (0.38s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1315: Took "331.210115ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1329: Took "48.298227ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.38s)
TestFunctional/parallel/ImageCommands/ImageListShort (1.06s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 image ls --format short --alsologtostderr
functional_test.go:261: (dbg) Done: out/minikube-linux-amd64 -p functional-676470 image ls --format short --alsologtostderr: (1.06426347s)
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-676470 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.1
registry.k8s.io/kube-proxy:v1.31.1
registry.k8s.io/kube-controller-manager:v1.31.1
registry.k8s.io/kube-apiserver:v1.31.1
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-676470
localhost/kicbase/echo-server:functional-676470
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20240813-c6f155d6
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-676470 image ls --format short --alsologtostderr:
I0923 10:41:56.514951   56289 out.go:345] Setting OutFile to fd 1 ...
I0923 10:41:56.515065   56289 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:41:56.515076   56289 out.go:358] Setting ErrFile to fd 2...
I0923 10:41:56.515081   56289 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:41:56.515283   56289 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3772/.minikube/bin
I0923 10:41:56.515870   56289 config.go:182] Loaded profile config "functional-676470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 10:41:56.515960   56289 config.go:182] Loaded profile config "functional-676470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 10:41:56.516306   56289 cli_runner.go:164] Run: docker container inspect functional-676470 --format={{.State.Status}}
I0923 10:41:56.541939   56289 ssh_runner.go:195] Run: systemctl --version
I0923 10:41:56.542011   56289 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-676470
I0923 10:41:56.564717   56289 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/functional-676470/id_rsa Username:docker}
I0923 10:41:56.731049   56289 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (1.06s)
TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-676470 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/etcd                    | 3.5.15-0           | 2e96e5913fc06 | 149MB  |
| registry.k8s.io/kube-apiserver          | v1.31.1            | 6bab7719df100 | 95.2MB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | latest             | beae173ccac6a | 1.46MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| localhost/my-image                      | functional-676470  | a2e645deed51a | 1.47MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-proxy              | v1.31.1            | 60c005f310ff3 | 92.7MB |
| localhost/kicbase/echo-server           | functional-676470  | 9056ab77afb8e | 4.94MB |
| localhost/minikube-local-cache-test     | functional-676470  | 618ac42d7b4e5 | 3.33kB |
| registry.k8s.io/coredns/coredns         | v1.11.3            | c69fa2e9cbf5f | 63.3MB |
| registry.k8s.io/kube-controller-manager | v1.31.1            | 175ffd71cce3d | 89.4MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/kindest/kindnetd              | v20240813-c6f155d6 | 12968670680f4 | 87.2MB |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| docker.io/library/nginx                 | alpine             | c7b4f26a7d93f | 44.6MB |
| docker.io/library/nginx                 | latest             | 39286ab8a5e14 | 192MB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-scheduler          | v1.31.1            | 9aa1fad941575 | 68.4MB |
| registry.k8s.io/pause                   | 3.10               | 873ed75102791 | 742kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-676470 image ls --format table --alsologtostderr:
I0923 10:42:03.718103   57238 out.go:345] Setting OutFile to fd 1 ...
I0923 10:42:03.718211   57238 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:42:03.718219   57238 out.go:358] Setting ErrFile to fd 2...
I0923 10:42:03.718224   57238 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:42:03.718434   57238 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3772/.minikube/bin
I0923 10:42:03.719066   57238 config.go:182] Loaded profile config "functional-676470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 10:42:03.719177   57238 config.go:182] Loaded profile config "functional-676470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 10:42:03.719816   57238 cli_runner.go:164] Run: docker container inspect functional-676470 --format={{.State.Status}}
I0923 10:42:03.737189   57238 ssh_runner.go:195] Run: systemctl --version
I0923 10:42:03.737237   57238 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-676470
I0923 10:42:03.754586   57238 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/functional-676470/id_rsa Username:docker}
I0923 10:42:03.841700   57238 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.21s)
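The Size column in the table above is the raw byte count (visible in the JSON/YAML listings for the same images) rendered in decimal (SI) units to three significant figures, e.g. 149009664 → 149MB and 3330 → 3.33kB. A rough sketch of that formatting, mirroring the output rather than minikube's actual implementation:

```python
# Reproduce the table's size column: SI units, three significant figures,
# no space before the unit. Illustrative only, not minikube's code.
def human_size(n: int) -> str:
    for unit, scale in (("GB", 1e9), ("MB", 1e6), ("kB", 1e3)):
        if n >= scale:
            value = n / scale
            # three significant figures: 149.009 -> "149", 31.470 -> "31.5"
            digits = 0 if value >= 100 else (1 if value >= 10 else 2)
            return f"{value:.{digits}f}{unit}"
    return f"{n}B"

print(human_size(149009664))  # etcd 3.5.15-0 -> 149MB
print(human_size(31470524))   # storage-provisioner v5 -> 31.5MB
```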
TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-676470 image ls --format json --alsologtostderr:
[{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1462480"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"a2e645deed51a6731c029ae47afa5673500026b006080c8e4da77136a8ac17ed","repoDigests":["localhost/my-image@sha256:7d8e3dca50d7c93d13902ba72b24e990af0e6ff2af8e207bd971c9cc4f9c2198"],"repoTags":["localhost/my-image:functional-676470"],"size":"1468194"},{"id":"0
184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":["registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e","registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"63273227"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2
a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"72355defb203aed8a89e53c4357ca3363e58a40b9337b9ca1e8e26118a3b13a4","repoDigests":["docker.io/library/ff8f8d071cd23252db393250399b6a6f922e2961cc8212319459284bd2b3281a-tmp@sha256:fbc3453933c26ab9911f45f50c5a179dbc242bc165aeae5e831cf6e4b4c2ddfd"],"repoTags":[],"size":"1465612"},{"id":"39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3","repoDigests":["docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3","docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e"],"repoTags":["docker.io/library/nginx:latest"],"size":"191853369"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":["localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf"],"repoTags":["local
host/kicbase/echo-server:functional-676470"],"size":"4943877"},{"id":"618ac42d7b4e5a79f332e5e5f7db52419d964867b3d1cf00b3f77328781f3030","repoDigests":["localhost/minikube-local-cache-test@sha256:97c4039a7e3e6cfcf945f49801e3b2f3a48fae42326a54d62fda1659a651403e"],"repoTags":["localhost/minikube-local-cache-test:functional-676470"],"size":"3330"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4","repoDigests":["registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d","registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"149009664"},{"id":"c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9","repoD
igests":["docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56","docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf"],"repoTags":["docker.io/library/nginx:alpine"],"size":"44647101"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"},{"id":"6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee","repoDigests":["registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771","registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.1"],"size":"95237600"},{"id":"175ffd7
1cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1","registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.1"],"size":"89437508"},{"id":"60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561","repoDigests":["registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44","registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.1"],"size":"92733849"},{"id":"9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b","repoDigests":["registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0","registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7
b03eddaeac26864d8"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.1"],"size":"68420934"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":["registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"742080"},{"id":"12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f","repoDigests":["docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b","docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166"],"repoTags":["docker.io/kindest/kindnetd:v20240813-c6f155d
6"],"size":"87190579"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-676470 image ls --format json --alsologtostderr:
I0923 10:42:03.509728   57186 out.go:345] Setting OutFile to fd 1 ...
I0923 10:42:03.509998   57186 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:42:03.510009   57186 out.go:358] Setting ErrFile to fd 2...
I0923 10:42:03.510014   57186 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:42:03.510177   57186 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3772/.minikube/bin
I0923 10:42:03.510766   57186 config.go:182] Loaded profile config "functional-676470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 10:42:03.510861   57186 config.go:182] Loaded profile config "functional-676470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 10:42:03.511220   57186 cli_runner.go:164] Run: docker container inspect functional-676470 --format={{.State.Status}}
I0923 10:42:03.527736   57186 ssh_runner.go:195] Run: systemctl --version
I0923 10:42:03.527782   57186 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-676470
I0923 10:42:03.544860   57186 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/functional-676470/id_rsa Username:docker}
I0923 10:42:03.634035   57186 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.21s)
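The JSON payload above comes from `sudo crictl images --output json` (the last command in the stderr trace) relayed through `image ls --format json`. A small sketch of extracting the short-format listing (repo:tag lines) from such a payload; the two-entry sample below is an excerpt in the same shape, with the field names (`id`, `repoTags`, `size`) matching the log:

```python
import json

# Two-entry excerpt shaped like the `image ls --format json` output above.
payload = json.loads("""[
  {"id": "2e96e5913fc06...", "repoTags": ["registry.k8s.io/etcd:3.5.15-0"], "size": "149009664"},
  {"id": "72355defb203a...", "repoTags": [], "size": "1465612"}
]""")

# Short format is one repo:tag per line; untagged images (empty repoTags,
# such as intermediate build layers) carry no tags and so produce no lines.
tags = [tag for image in payload for tag in image["repoTags"]]
for tag in tags:
    print(tag)
```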
TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-amd64 -p functional-676470 image ls --format yaml --alsologtostderr:
- id: 9aa1fad941575eed91ab13d44f3e4cb5b1ff4e09cbbe954ea63002289416a13b
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:969a7e96340f3a927b3d652582edec2d6d82a083871d81ef5064b7edaab430d0
- registry.k8s.io/kube-scheduler@sha256:cb9d9404dddf0c6728b99a42d10d8ab1ece2a1c793ef1d7b03eddaeac26864d8
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.1
size: "68420934"
- id: c7b4f26a7d93f4f1f276c51adb03ef0df54a82de89f254a9aec5c18bf0e45ee9
repoDigests:
- docker.io/library/nginx@sha256:074604130336e3c431b7c6b5b551b5a6ae5b67db13b3d223c6db638f85c7ff56
- docker.io/library/nginx@sha256:a5127daff3d6f4606be3100a252419bfa84fd6ee5cd74d0feaca1a5068f97dcf
repoTags:
- docker.io/library/nginx:alpine
size: "44647101"
- id: 39286ab8a5e14aeaf5fdd6e2fac76e0c8d31a0c07224f0ee5e6be502f12e93f3
repoDigests:
- docker.io/library/nginx@sha256:04ba374043ccd2fc5c593885c0eacddebabd5ca375f9323666f28dfd5a9710e3
- docker.io/library/nginx@sha256:88a0a069d5e9865fcaaf8c1e53ba6bf3d8d987b0fdc5e0135fec8ce8567d673e
repoTags:
- docker.io/library/nginx:latest
size: "191853369"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 175ffd71cce3d90bae95904b55260db941b10007a4e5471a19f3135b30aa9cd1
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:9f9da5b27e03f89599cc40ba89150aebf3b4cff001e6db6d998674b34181e1a1
- registry.k8s.io/kube-controller-manager@sha256:e6c5253433f9032cff2bd9b1f41e29b9691a6d6ec97903896c0ca5f069a63748
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.1
size: "89437508"
- id: 12968670680f4561ef6818782391eb120d6e3365cf3f967aad58749f95381a4f
repoDigests:
- docker.io/kindest/kindnetd@sha256:7dd6b2417263c1bdd6840b33fb61c2d0038c044b91195135969b92effa15d56b
- docker.io/kindest/kindnetd@sha256:e59a687ca28ae274a2fc92f1e2f5f1c739f353178a43a23aafc71adb802ed166
repoTags:
- docker.io/kindest/kindnetd:v20240813-c6f155d6
size: "87190579"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests:
- localhost/kicbase/echo-server@sha256:d3d0b737c6413dcf7b9393d61285525048f2d10a0aae68296150078d379c30cf
repoTags:
- localhost/kicbase/echo-server:functional-676470
size: "4943877"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests:
- registry.k8s.io/pause@sha256:7c38f24774e3cbd906d2d33c38354ccf787635581c122965132c9bd309754d4a
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "742080"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 618ac42d7b4e5a79f332e5e5f7db52419d964867b3d1cf00b3f77328781f3030
repoDigests:
- localhost/minikube-local-cache-test@sha256:97c4039a7e3e6cfcf945f49801e3b2f3a48fae42326a54d62fda1659a651403e
repoTags:
- localhost/minikube-local-cache-test:functional-676470
size: "3330"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
- registry.k8s.io/coredns/coredns@sha256:f0b8c589314ed010a0c326e987a52b50801f0145ac9b75423af1b5c66dbd6d50
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "63273227"
- id: 60c005f310ff3ad6d131805170f07d2946095307063eaaa5eedcaf06a0a89561
repoDigests:
- registry.k8s.io/kube-proxy@sha256:4ee50b00484d7f39a90fc4cda92251177ef5ad8fdf2f2a0c768f9e634b4c6d44
- registry.k8s.io/kube-proxy@sha256:bb26bcf4490a4653ecb77ceb883c0fd8dd876f104f776aa0a6cbf9df68b16af2
repoTags:
- registry.k8s.io/kube-proxy:v1.31.1
size: "92733849"
- id: 2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4
repoDigests:
- registry.k8s.io/etcd@sha256:4e535f53f767fe400c2deec37fef7a6ab19a79a1db35041d067597641cd8b89d
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "149009664"
- id: 6bab7719df1001fdcc7e39f1decfa1f73b7f3af2757a91c5bafa1aaea29d1aee
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:1f30d71692d2ab71ce2c1dd5fab86e0cb00ce888d21de18806f5482021d18771
- registry.k8s.io/kube-apiserver@sha256:2409c23dbb5a2b7a81adbb184d3eac43ac653e9b97a7c0ee121b89bb3ef61fdb
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.1
size: "95237600"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
functional_test.go:269: (dbg) Stderr: out/minikube-linux-amd64 -p functional-676470 image ls --format yaml --alsologtostderr:
I0923 10:41:57.585883   56428 out.go:345] Setting OutFile to fd 1 ...
I0923 10:41:57.586014   56428 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:41:57.586026   56428 out.go:358] Setting ErrFile to fd 2...
I0923 10:41:57.586034   56428 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:41:57.586227   56428 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3772/.minikube/bin
I0923 10:41:57.587054   56428 config.go:182] Loaded profile config "functional-676470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 10:41:57.587220   56428 config.go:182] Loaded profile config "functional-676470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 10:41:57.587789   56428 cli_runner.go:164] Run: docker container inspect functional-676470 --format={{.State.Status}}
I0923 10:41:57.613622   56428 ssh_runner.go:195] Run: systemctl --version
I0923 10:41:57.613669   56428 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-676470
I0923 10:41:57.632117   56428 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/functional-676470/id_rsa Username:docker}
I0923 10:41:57.726004   56428 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.23s)
TestFunctional/parallel/ImageCommands/ImageBuild (5.7s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-676470 ssh pgrep buildkitd: exit status 1 (246.482422ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 image build -t localhost/my-image:functional-676470 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-amd64 -p functional-676470 image build -t localhost/my-image:functional-676470 testdata/build --alsologtostderr: (5.252042015s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-amd64 -p functional-676470 image build -t localhost/my-image:functional-676470 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 72355defb20
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-676470
--> a2e645deed5
Successfully tagged localhost/my-image:functional-676470
a2e645deed51a6731c029ae47afa5673500026b006080c8e4da77136a8ac17ed
functional_test.go:323: (dbg) Stderr: out/minikube-linux-amd64 -p functional-676470 image build -t localhost/my-image:functional-676470 testdata/build --alsologtostderr:
I0923 10:41:58.057266   56587 out.go:345] Setting OutFile to fd 1 ...
I0923 10:41:58.057768   56587 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:41:58.057784   56587 out.go:358] Setting ErrFile to fd 2...
I0923 10:41:58.057791   56587 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0923 10:41:58.058188   56587 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3772/.minikube/bin
I0923 10:41:58.059660   56587 config.go:182] Loaded profile config "functional-676470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 10:41:58.060210   56587 config.go:182] Loaded profile config "functional-676470": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
I0923 10:41:58.060632   56587 cli_runner.go:164] Run: docker container inspect functional-676470 --format={{.State.Status}}
I0923 10:41:58.078722   56587 ssh_runner.go:195] Run: systemctl --version
I0923 10:41:58.078773   56587 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-676470
I0923 10:41:58.095186   56587 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32778 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/functional-676470/id_rsa Username:docker}
I0923 10:41:58.189821   56587 build_images.go:161] Building image from path: /tmp/build.934843013.tar
I0923 10:41:58.189899   56587 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0923 10:41:58.198541   56587 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.934843013.tar
I0923 10:41:58.201753   56587 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.934843013.tar: stat -c "%s %y" /var/lib/minikube/build/build.934843013.tar: Process exited with status 1
stdout:
stderr:
stat: cannot statx '/var/lib/minikube/build/build.934843013.tar': No such file or directory
I0923 10:41:58.201792   56587 ssh_runner.go:362] scp /tmp/build.934843013.tar --> /var/lib/minikube/build/build.934843013.tar (3072 bytes)
I0923 10:41:58.240054   56587 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.934843013
I0923 10:41:58.248536   56587 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.934843013 -xf /var/lib/minikube/build/build.934843013.tar
I0923 10:41:58.257684   56587 crio.go:315] Building image: /var/lib/minikube/build/build.934843013
I0923 10:41:58.257785   56587 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-676470 /var/lib/minikube/build/build.934843013 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0923 10:42:03.245375   56587 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-676470 /var/lib/minikube/build/build.934843013 --cgroup-manager=cgroupfs: (4.98756482s)
I0923 10:42:03.245445   56587 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.934843013
I0923 10:42:03.253791   56587 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.934843013.tar
I0923 10:42:03.261571   56587 build_images.go:217] Built localhost/my-image:functional-676470 from /tmp/build.934843013.tar
I0923 10:42:03.261598   56587 build_images.go:133] succeeded building to: functional-676470
I0923 10:42:03.261603   56587 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (5.70s)

TestFunctional/parallel/ImageCommands/Setup (2.13s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:342: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.106854385s)
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-676470
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.13s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1366: Took "325.135512ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1379: Took "48.47275ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.37s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.4s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-676470 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-amd64 -p functional-676470 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-676470 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 49483: os: process already finished
helpers_test.go:502: unable to terminate pid 49231: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-amd64 -p functional-676470 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.40s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-amd64 -p functional-676470 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.23s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-676470 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [ed3a9641-b993-4b0b-9872-d82ac7ac50a4] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [ed3a9641-b993-4b0b-9872-d82ac7ac50a4] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 14.004232418s
I0923 10:41:44.741839   10562 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (14.23s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 image load --daemon kicbase/echo-server:functional-676470 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-amd64 -p functional-676470 image load --daemon kicbase/echo-server:functional-676470 --alsologtostderr: (1.996134176s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (2.20s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 image load --daemon kicbase/echo-server:functional-676470 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.85s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.77s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-676470
functional_test.go:245: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 image load --daemon kicbase/echo-server:functional-676470 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 image ls
E0923 10:41:36.640614   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.77s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 image save kicbase/echo-server:functional-676470 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:380: (dbg) Done: out/minikube-linux-amd64 -p functional-676470 image save kicbase/echo-server:functional-676470 /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr: (1.189370133s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.19s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 image rm kicbase/echo-server:functional-676470 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.48s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 image load /home/jenkins/workspace/Docker_Linux_crio_integration/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.73s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-676470
functional_test.go:424: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 image save --daemon kicbase/echo-server:functional-676470 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-676470
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.52s)

TestFunctional/parallel/ServiceCmd/DeployApp (7.14s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1439: (dbg) Run:  kubectl --context functional-676470 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-676470 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6b9f76b5c7-mpfp9" [82862a34-7251-4b31-8188-c4f47bc7bae0] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6b9f76b5c7-mpfp9" [82862a34-7251-4b31-8188-c4f47bc7bae0] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 7.004292456s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (7.14s)

TestFunctional/parallel/MountCmd/any-port (9.68s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-676470 /tmp/TestFunctionalparallelMountCmdany-port3374929296/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1727088101208680240" to /tmp/TestFunctionalparallelMountCmdany-port3374929296/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1727088101208680240" to /tmp/TestFunctionalparallelMountCmdany-port3374929296/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1727088101208680240" to /tmp/TestFunctionalparallelMountCmdany-port3374929296/001/test-1727088101208680240
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-676470 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (328.244383ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0923 10:41:41.537264   10562 retry.go:31] will retry after 287.458603ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Sep 23 10:41 created-by-test
-rw-r--r-- 1 docker docker 24 Sep 23 10:41 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Sep 23 10:41 test-1727088101208680240
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh cat /mount-9p/test-1727088101208680240
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-676470 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [f7cfb235-1aac-4060-8cbc-4dc0ecf4ca8c] Pending
helpers_test.go:344: "busybox-mount" [f7cfb235-1aac-4060-8cbc-4dc0ecf4ca8c] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [f7cfb235-1aac-4060-8cbc-4dc0ecf4ca8c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [f7cfb235-1aac-4060-8cbc-4dc0ecf4ca8c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 7.00331762s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-676470 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-676470 /tmp/TestFunctionalparallelMountCmdany-port3374929296/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.68s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-676470 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.06s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.111.31.249 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-amd64 -p functional-676470 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.12s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

TestFunctional/parallel/ServiceCmd/List (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.48s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 service list -o json
functional_test.go:1494: Took "483.493804ms" to run "out/minikube-linux-amd64 -p functional-676470 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.48s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31113
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.53s)

TestFunctional/parallel/ServiceCmd/Format (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.50s)

TestFunctional/parallel/ServiceCmd/URL (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31113
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.50s)

TestFunctional/parallel/MountCmd/specific-port (2.46s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-676470 /tmp/TestFunctionalparallelMountCmdspecific-port379772981/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-676470 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (347.91113ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0923 10:41:51.240054   10562 retry.go:31] will retry after 705.355802ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-676470 /tmp/TestFunctionalparallelMountCmdspecific-port379772981/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-676470 ssh "sudo umount -f /mount-9p": exit status 1 (391.21162ms)
-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-676470 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-676470 /tmp/TestFunctionalparallelMountCmdspecific-port379772981/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.46s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.25s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-676470 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2505911833/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-676470 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2505911833/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-676470 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2505911833/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-676470 ssh "findmnt -T" /mount1: exit status 1 (582.045482ms)
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
I0923 10:41:53.936702   10562 retry.go:31] will retry after 255.515744ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-676470 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-676470 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-676470 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2505911833/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-676470 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2505911833/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-676470 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2505911833/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.25s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-676470
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-676470
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-676470
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (151.91s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-481206 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0923 10:42:58.562062   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-481206 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m31.238155677s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (151.91s)

TestMultiControlPlane/serial/DeployApp (7.71s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-481206 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-481206 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-481206 -- rollout status deployment/busybox: (5.944674264s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-481206 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-481206 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-481206 -- exec busybox-7dff88458-bxb9b -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-481206 -- exec busybox-7dff88458-mqgks -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-481206 -- exec busybox-7dff88458-tcq7l -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-481206 -- exec busybox-7dff88458-bxb9b -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-481206 -- exec busybox-7dff88458-mqgks -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-481206 -- exec busybox-7dff88458-tcq7l -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-481206 -- exec busybox-7dff88458-bxb9b -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-481206 -- exec busybox-7dff88458-mqgks -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-481206 -- exec busybox-7dff88458-tcq7l -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (7.71s)

TestMultiControlPlane/serial/PingHostFromPods (1.01s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-481206 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-481206 -- exec busybox-7dff88458-bxb9b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-481206 -- exec busybox-7dff88458-bxb9b -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-481206 -- exec busybox-7dff88458-mqgks -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-481206 -- exec busybox-7dff88458-mqgks -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-481206 -- exec busybox-7dff88458-tcq7l -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-481206 -- exec busybox-7dff88458-tcq7l -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.01s)
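The `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3` pipeline above extracts the host IP that each busybox pod then pings. A self-contained sketch of that extraction; the canned lookup output is an assumed stand-in for a live busybox `nslookup` run inside the pod:

```shell
#!/bin/sh
# Host-IP extraction sketch for PingHostFromPods: keep line 5 of the
# nslookup output, then take the third space-separated field. The canned
# output below is an assumption standing in for a real in-pod lookup of
# host.minikube.internal.
nslookup_out='Server:    10.96.0.10
Address:   10.96.0.10:53

Name:      host.minikube.internal
Address: 1 192.168.49.1'
ip=$(printf '%s\n' "$nslookup_out" | awk 'NR==5' | cut -d' ' -f3)
echo "$ip"
```

Note the pipeline is tied to this exact output shape: a different busybox version that prints its answer on another line would break the `NR==5` selection.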

TestMultiControlPlane/serial/AddWorkerNode (32.85s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-481206 -v=7 --alsologtostderr
E0923 10:45:14.699519   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-481206 -v=7 --alsologtostderr: (32.024986862s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (32.85s)

TestMultiControlPlane/serial/NodeLabels (0.06s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-481206 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.06s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.84s)

TestMultiControlPlane/serial/CopyFile (15.4s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 cp testdata/cp-test.txt ha-481206:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 cp ha-481206:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2685930382/001/cp-test_ha-481206.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 cp ha-481206:/home/docker/cp-test.txt ha-481206-m02:/home/docker/cp-test_ha-481206_ha-481206-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206-m02 "sudo cat /home/docker/cp-test_ha-481206_ha-481206-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 cp ha-481206:/home/docker/cp-test.txt ha-481206-m03:/home/docker/cp-test_ha-481206_ha-481206-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206-m03 "sudo cat /home/docker/cp-test_ha-481206_ha-481206-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 cp ha-481206:/home/docker/cp-test.txt ha-481206-m04:/home/docker/cp-test_ha-481206_ha-481206-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206-m04 "sudo cat /home/docker/cp-test_ha-481206_ha-481206-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 cp testdata/cp-test.txt ha-481206-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 cp ha-481206-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2685930382/001/cp-test_ha-481206-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 cp ha-481206-m02:/home/docker/cp-test.txt ha-481206:/home/docker/cp-test_ha-481206-m02_ha-481206.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206 "sudo cat /home/docker/cp-test_ha-481206-m02_ha-481206.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 cp ha-481206-m02:/home/docker/cp-test.txt ha-481206-m03:/home/docker/cp-test_ha-481206-m02_ha-481206-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206-m03 "sudo cat /home/docker/cp-test_ha-481206-m02_ha-481206-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 cp ha-481206-m02:/home/docker/cp-test.txt ha-481206-m04:/home/docker/cp-test_ha-481206-m02_ha-481206-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206-m04 "sudo cat /home/docker/cp-test_ha-481206-m02_ha-481206-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 cp testdata/cp-test.txt ha-481206-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 cp ha-481206-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2685930382/001/cp-test_ha-481206-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 cp ha-481206-m03:/home/docker/cp-test.txt ha-481206:/home/docker/cp-test_ha-481206-m03_ha-481206.txt
E0923 10:45:42.403736   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206 "sudo cat /home/docker/cp-test_ha-481206-m03_ha-481206.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 cp ha-481206-m03:/home/docker/cp-test.txt ha-481206-m02:/home/docker/cp-test_ha-481206-m03_ha-481206-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206-m02 "sudo cat /home/docker/cp-test_ha-481206-m03_ha-481206-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 cp ha-481206-m03:/home/docker/cp-test.txt ha-481206-m04:/home/docker/cp-test_ha-481206-m03_ha-481206-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206-m04 "sudo cat /home/docker/cp-test_ha-481206-m03_ha-481206-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 cp testdata/cp-test.txt ha-481206-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 cp ha-481206-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2685930382/001/cp-test_ha-481206-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 cp ha-481206-m04:/home/docker/cp-test.txt ha-481206:/home/docker/cp-test_ha-481206-m04_ha-481206.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206 "sudo cat /home/docker/cp-test_ha-481206-m04_ha-481206.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 cp ha-481206-m04:/home/docker/cp-test.txt ha-481206-m02:/home/docker/cp-test_ha-481206-m04_ha-481206-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206-m02 "sudo cat /home/docker/cp-test_ha-481206-m04_ha-481206-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 cp ha-481206-m04:/home/docker/cp-test.txt ha-481206-m03:/home/docker/cp-test_ha-481206-m04_ha-481206-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 ssh -n ha-481206-m03 "sudo cat /home/docker/cp-test_ha-481206-m04_ha-481206-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (15.40s)
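Each `cp` / `ssh -n ... sudo cat` pair above is one round-trip check: copy a file onto a node, read it back, compare with the original. A minimal local sketch of that round-trip; temporary files and directories stand in for the `minikube cp` and `minikube ssh` targets such as ha-481206-m02:/home/docker:

```shell
#!/bin/sh
# CopyFile round-trip sketch: a temp dir plays the role of a node's
# /home/docker, plain cp plays the role of `minikube cp`, and cmp plays
# the role of `minikube ssh -n <node> "sudo cat ..."` plus comparison.
src=$(mktemp)
echo "cp-test contents" > "$src"
node=$(mktemp -d)
cp "$src" "$node/cp-test.txt"              # ~ minikube cp testdata/cp-test.txt node:/home/docker/cp-test.txt
result=""
if cmp -s "$src" "$node/cp-test.txt"; then # ~ read back and compare
  result="round-trip OK"
fi
echo "$result"
rm -rf "$src" "$node"
```

The real test repeats this for every ordered pair of the four nodes, which is why the section runs to dozens of nearly identical lines.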

TestMultiControlPlane/serial/StopSecondaryNode (12.45s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-linux-amd64 -p ha-481206 node stop m02 -v=7 --alsologtostderr: (11.804639715s)
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481206 status -v=7 --alsologtostderr: exit status 7 (646.728806ms)
-- stdout --
	ha-481206
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-481206-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-481206-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-481206-m04
	type: Worker
	host: Running
	kubelet: Running
	
-- /stdout --
** stderr ** 
	I0923 10:46:00.298479   78641 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:46:00.298579   78641 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:46:00.298586   78641 out.go:358] Setting ErrFile to fd 2...
	I0923 10:46:00.298590   78641 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:46:00.298784   78641 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3772/.minikube/bin
	I0923 10:46:00.298997   78641 out.go:352] Setting JSON to false
	I0923 10:46:00.299029   78641 mustload.go:65] Loading cluster: ha-481206
	I0923 10:46:00.299141   78641 notify.go:220] Checking for updates...
	I0923 10:46:00.299425   78641 config.go:182] Loaded profile config "ha-481206": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:46:00.299446   78641 status.go:174] checking status of ha-481206 ...
	I0923 10:46:00.299943   78641 cli_runner.go:164] Run: docker container inspect ha-481206 --format={{.State.Status}}
	I0923 10:46:00.317680   78641 status.go:364] ha-481206 host status = "Running" (err=<nil>)
	I0923 10:46:00.317717   78641 host.go:66] Checking if "ha-481206" exists ...
	I0923 10:46:00.318005   78641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481206
	I0923 10:46:00.336215   78641 host.go:66] Checking if "ha-481206" exists ...
	I0923 10:46:00.336566   78641 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:46:00.336639   78641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481206
	I0923 10:46:00.355346   78641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32783 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/ha-481206/id_rsa Username:docker}
	I0923 10:46:00.450641   78641 ssh_runner.go:195] Run: systemctl --version
	I0923 10:46:00.454517   78641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:46:00.464804   78641 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 10:46:00.512808   78641 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:50 OomKillDisable:true NGoroutines:72 SystemTime:2024-09-23 10:46:00.503338109 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 10:46:00.513335   78641 kubeconfig.go:125] found "ha-481206" server: "https://192.168.49.254:8443"
	I0923 10:46:00.513364   78641 api_server.go:166] Checking apiserver status ...
	I0923 10:46:00.513406   78641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:46:00.523569   78641 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1523/cgroup
	I0923 10:46:00.531942   78641 api_server.go:182] apiserver freezer: "3:freezer:/docker/1d62eaf928548eeabf38c9ca047e23d571989d270621e317f1e954f02684285e/crio/crio-5edaf5c986ac0b90a0a6ff484d7d014521113ca1c187e996cc3580e7348a541c"
	I0923 10:46:00.531990   78641 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/1d62eaf928548eeabf38c9ca047e23d571989d270621e317f1e954f02684285e/crio/crio-5edaf5c986ac0b90a0a6ff484d7d014521113ca1c187e996cc3580e7348a541c/freezer.state
	I0923 10:46:00.539594   78641 api_server.go:204] freezer state: "THAWED"
	I0923 10:46:00.539624   78641 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0923 10:46:00.544375   78641 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0923 10:46:00.544403   78641 status.go:456] ha-481206 apiserver status = Running (err=<nil>)
	I0923 10:46:00.544415   78641 status.go:176] ha-481206 status: &{Name:ha-481206 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:46:00.544431   78641 status.go:174] checking status of ha-481206-m02 ...
	I0923 10:46:00.544662   78641 cli_runner.go:164] Run: docker container inspect ha-481206-m02 --format={{.State.Status}}
	I0923 10:46:00.561565   78641 status.go:364] ha-481206-m02 host status = "Stopped" (err=<nil>)
	I0923 10:46:00.561590   78641 status.go:377] host is not running, skipping remaining checks
	I0923 10:46:00.561596   78641 status.go:176] ha-481206-m02 status: &{Name:ha-481206-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:46:00.561617   78641 status.go:174] checking status of ha-481206-m03 ...
	I0923 10:46:00.561886   78641 cli_runner.go:164] Run: docker container inspect ha-481206-m03 --format={{.State.Status}}
	I0923 10:46:00.578507   78641 status.go:364] ha-481206-m03 host status = "Running" (err=<nil>)
	I0923 10:46:00.578530   78641 host.go:66] Checking if "ha-481206-m03" exists ...
	I0923 10:46:00.578777   78641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481206-m03
	I0923 10:46:00.595637   78641 host.go:66] Checking if "ha-481206-m03" exists ...
	I0923 10:46:00.595933   78641 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:46:00.595979   78641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481206-m03
	I0923 10:46:00.612485   78641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32793 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/ha-481206-m03/id_rsa Username:docker}
	I0923 10:46:00.706786   78641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:46:00.718126   78641 kubeconfig.go:125] found "ha-481206" server: "https://192.168.49.254:8443"
	I0923 10:46:00.718153   78641 api_server.go:166] Checking apiserver status ...
	I0923 10:46:00.718189   78641 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 10:46:00.728190   78641 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1426/cgroup
	I0923 10:46:00.736824   78641 api_server.go:182] apiserver freezer: "3:freezer:/docker/f3d12c9c25eae153272ee5f5f5f7b4d1e0cbaadea6624d48b451a3963ef972ee/crio/crio-6378dcded92b95e94b616d9227fffa40909489127915ebbef8de25d87b251ceb"
	I0923 10:46:00.736881   78641 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/f3d12c9c25eae153272ee5f5f5f7b4d1e0cbaadea6624d48b451a3963ef972ee/crio/crio-6378dcded92b95e94b616d9227fffa40909489127915ebbef8de25d87b251ceb/freezer.state
	I0923 10:46:00.744691   78641 api_server.go:204] freezer state: "THAWED"
	I0923 10:46:00.744716   78641 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I0923 10:46:00.748379   78641 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I0923 10:46:00.748404   78641 status.go:456] ha-481206-m03 apiserver status = Running (err=<nil>)
	I0923 10:46:00.748412   78641 status.go:176] ha-481206-m03 status: &{Name:ha-481206-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:46:00.748427   78641 status.go:174] checking status of ha-481206-m04 ...
	I0923 10:46:00.748657   78641 cli_runner.go:164] Run: docker container inspect ha-481206-m04 --format={{.State.Status}}
	I0923 10:46:00.765933   78641 status.go:364] ha-481206-m04 host status = "Running" (err=<nil>)
	I0923 10:46:00.765955   78641 host.go:66] Checking if "ha-481206-m04" exists ...
	I0923 10:46:00.766198   78641 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-481206-m04
	I0923 10:46:00.782957   78641 host.go:66] Checking if "ha-481206-m04" exists ...
	I0923 10:46:00.783221   78641 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 10:46:00.783257   78641 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-481206-m04
	I0923 10:46:00.800378   78641 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32798 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/ha-481206-m04/id_rsa Username:docker}
	I0923 10:46:00.890204   78641 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 10:46:00.900736   78641 status.go:176] ha-481206-m04 status: &{Name:ha-481206-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.45s)
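The stderr above shows how `status` decides an apiserver is worth health-checking: it resolves the container's freezer cgroup (api_server.go:182), reads `freezer.state`, and only polls `/healthz` when the state is THAWED (api_server.go:204). A minimal sketch of that gate; a temp file stands in for the real path under /sys/fs/cgroup/freezer/docker/<id>/crio/<id>/freezer.state:

```shell
#!/bin/sh
# Freezer-state gate sketch: read the cgroup freezer state and proceed to
# the healthz poll only when the process is not frozen. The temp file is
# an assumed stand-in for the real freezer.state path in the container.
state_file=$(mktemp)
echo "THAWED" > "$state_file"
state=$(cat "$state_file")
if [ "$state" = "THAWED" ]; then
  echo "apiserver not frozen, safe to poll /healthz"
fi
rm -f "$state_file"
```

On a frozen or stopped node the gate never reaches the healthz poll, which is why the stopped m02 above reports only host/kubelet/apiserver Stopped without an HTTP check.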

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.66s)

TestMultiControlPlane/serial/RestartSecondaryNode (33.45s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 node start m02 -v=7 --alsologtostderr
E0923 10:46:30.164522   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/functional-676470/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:46:30.170885   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/functional-676470/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:46:30.182249   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/functional-676470/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:46:30.203615   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/functional-676470/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:46:30.245012   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/functional-676470/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:46:30.326568   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/functional-676470/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:46:30.488448   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/functional-676470/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:46:30.810095   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/functional-676470/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:46:31.451505   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/functional-676470/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:46:32.733724   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/functional-676470/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:420: (dbg) Done: out/minikube-linux-amd64 -p ha-481206 node start m02 -v=7 --alsologtostderr: (32.437409971s)
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (33.45s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0923 10:46:35.294980   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/functional-676470/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (1.00240427s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.00s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (198.23s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-481206 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-481206 -v=7 --alsologtostderr
E0923 10:46:40.416792   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/functional-676470/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:46:50.658478   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/functional-676470/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:47:11.140185   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/functional-676470/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:462: (dbg) Done: out/minikube-linux-amd64 stop -p ha-481206 -v=7 --alsologtostderr: (36.65973191s)
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-481206 --wait=true -v=7 --alsologtostderr
E0923 10:47:52.101588   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/functional-676470/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:49:14.022979   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/functional-676470/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-481206 --wait=true -v=7 --alsologtostderr: (2m41.476691608s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-481206
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (198.23s)

TestMultiControlPlane/serial/DeleteSecondaryNode (11.28s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-481206 node delete m03 -v=7 --alsologtostderr: (10.519220521s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (11.28s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.66s)

TestMultiControlPlane/serial/StopCluster (35.53s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 stop -v=7 --alsologtostderr
E0923 10:50:14.701711   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:531: (dbg) Done: out/minikube-linux-amd64 -p ha-481206 stop -v=7 --alsologtostderr: (35.433605378s)
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-481206 status -v=7 --alsologtostderr: exit status 7 (98.105843ms)
-- stdout --
	ha-481206
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-481206-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-481206-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I0923 10:50:41.652886   96454 out.go:345] Setting OutFile to fd 1 ...
	I0923 10:50:41.653000   96454 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:50:41.653011   96454 out.go:358] Setting ErrFile to fd 2...
	I0923 10:50:41.653018   96454 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 10:50:41.653226   96454 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3772/.minikube/bin
	I0923 10:50:41.653453   96454 out.go:352] Setting JSON to false
	I0923 10:50:41.653487   96454 mustload.go:65] Loading cluster: ha-481206
	I0923 10:50:41.653591   96454 notify.go:220] Checking for updates...
	I0923 10:50:41.653998   96454 config.go:182] Loaded profile config "ha-481206": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 10:50:41.654022   96454 status.go:174] checking status of ha-481206 ...
	I0923 10:50:41.654459   96454 cli_runner.go:164] Run: docker container inspect ha-481206 --format={{.State.Status}}
	I0923 10:50:41.672988   96454 status.go:364] ha-481206 host status = "Stopped" (err=<nil>)
	I0923 10:50:41.673020   96454 status.go:377] host is not running, skipping remaining checks
	I0923 10:50:41.673027   96454 status.go:176] ha-481206 status: &{Name:ha-481206 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:50:41.673053   96454 status.go:174] checking status of ha-481206-m02 ...
	I0923 10:50:41.673394   96454 cli_runner.go:164] Run: docker container inspect ha-481206-m02 --format={{.State.Status}}
	I0923 10:50:41.691329   96454 status.go:364] ha-481206-m02 host status = "Stopped" (err=<nil>)
	I0923 10:50:41.691356   96454 status.go:377] host is not running, skipping remaining checks
	I0923 10:50:41.691363   96454 status.go:176] ha-481206-m02 status: &{Name:ha-481206-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 10:50:41.691391   96454 status.go:174] checking status of ha-481206-m04 ...
	I0923 10:50:41.691693   96454 cli_runner.go:164] Run: docker container inspect ha-481206-m04 --format={{.State.Status}}
	I0923 10:50:41.708874   96454 status.go:364] ha-481206-m04 host status = "Stopped" (err=<nil>)
	I0923 10:50:41.708898   96454 status.go:377] host is not running, skipping remaining checks
	I0923 10:50:41.708903   96454 status.go:176] ha-481206-m04 status: &{Name:ha-481206-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.53s)

TestMultiControlPlane/serial/RestartCluster (95.02s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-481206 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E0923 10:51:30.163766   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/functional-676470/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:51:57.864844   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/functional-676470/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-481206 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m34.264178733s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (95.02s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.65s)

TestMultiControlPlane/serial/AddSecondaryNode (66.32s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-481206 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-481206 --control-plane -v=7 --alsologtostderr: (1m5.493735479s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-481206 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (66.32s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.84s)

TestJSONOutput/start/Command (70.91s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-287924 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-287924 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (1m10.911992492s)
--- PASS: TestJSONOutput/start/Command (70.91s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.63s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-287924 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.63s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.57s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-287924 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.57s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.73s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-287924 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-287924 --output=json --user=testUser: (5.732406396s)
--- PASS: TestJSONOutput/stop/Command (5.73s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.2s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-883009 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-883009 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (62.997369ms)
-- stdout --
	{"specversion":"1.0","id":"e5a6f9fb-0149-4949-9920-30df3b682678","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-883009] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"c7125ebd-4136-42d3-8b0a-fef929cbcd13","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19689"}}
	{"specversion":"1.0","id":"633a04bd-a06a-431e-9f5b-6cd8a7ea050f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"4117d5e0-5436-41ce-87e9-d939c5178691","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19689-3772/kubeconfig"}}
	{"specversion":"1.0","id":"163d10de-2800-43ce-9b8d-7d5490790a4e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3772/.minikube"}}
	{"specversion":"1.0","id":"bca45740-c1e7-4b0b-b75b-6a7944d1d4da","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"00c1edc9-2da5-49c5-ba86-073981876980","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"89673ea1-d2e2-4ff1-b2b8-6c14a2e4cbd6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-883009" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-883009
--- PASS: TestErrorJSONOutput (0.20s)

TestKicCustomNetwork/create_custom_network (35.7s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-461559 --network=
E0923 10:55:14.699847   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-461559 --network=: (33.648760889s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-461559" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-461559
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-461559: (2.037179616s)
--- PASS: TestKicCustomNetwork/create_custom_network (35.70s)

TestKicCustomNetwork/use_default_bridge_network (25.73s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-amd64 start -p docker-network-420155 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-amd64 start -p docker-network-420155 --network=bridge: (23.897417655s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-420155" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p docker-network-420155
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p docker-network-420155: (1.812799586s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (25.73s)

TestKicExistingNetwork (26.08s)

=== RUN   TestKicExistingNetwork
I0923 10:55:55.221775   10562 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0923 10:55:55.238389   10562 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0923 10:55:55.238463   10562 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I0923 10:55:55.238482   10562 cli_runner.go:164] Run: docker network inspect existing-network
W0923 10:55:55.253941   10562 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I0923 10:55:55.253968   10562 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I0923 10:55:55.253985   10562 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I0923 10:55:55.254100   10562 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0923 10:55:55.270259   10562 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-4381ce043a4e IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:84:38:d5:57} reservation:<nil>}
I0923 10:55:55.270763   10562 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc001457800}
I0923 10:55:55.270788   10562 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I0923 10:55:55.270830   10562 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I0923 10:55:55.333414   10562 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-amd64 start -p existing-network-441255 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-amd64 start -p existing-network-441255 --network=existing-network: (24.060246239s)
helpers_test.go:175: Cleaning up "existing-network-441255" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p existing-network-441255
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p existing-network-441255: (1.873680707s)
I0923 10:56:21.284607   10562 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (26.08s)

TestKicCustomSubnet (26.79s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-subnet-598124 --subnet=192.168.60.0/24
E0923 10:56:30.165652   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/functional-676470/client.crt: no such file or directory" logger="UnhandledError"
E0923 10:56:37.765110   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-subnet-598124 --subnet=192.168.60.0/24: (24.800642477s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-598124 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-598124" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p custom-subnet-598124
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p custom-subnet-598124: (1.969176589s)
--- PASS: TestKicCustomSubnet (26.79s)

TestKicStaticIP (26.15s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-amd64 start -p static-ip-841959 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-amd64 start -p static-ip-841959 --static-ip=192.168.200.200: (24.408773909s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-amd64 -p static-ip-841959 ip
helpers_test.go:175: Cleaning up "static-ip-841959" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p static-ip-841959
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p static-ip-841959: (1.624369894s)
--- PASS: TestKicStaticIP (26.15s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (49.23s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-043069 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-043069 --driver=docker  --container-runtime=crio: (20.505160928s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-053560 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-053560 --driver=docker  --container-runtime=crio: (23.644875174s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-043069
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-053560
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-053560" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-053560
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-053560: (1.806686206s)
helpers_test.go:175: Cleaning up "first-043069" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-043069
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p first-043069: (2.173551735s)
--- PASS: TestMinikubeProfile (49.23s)

TestMountStart/serial/StartWithMountFirst (9.08s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-639692 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-639692 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.078123475s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.08s)

TestMountStart/serial/VerifyMountFirst (0.24s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-639692 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.24s)

TestMountStart/serial/StartWithMountSecond (6.22s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-651159 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-651159 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (5.2178466s)
--- PASS: TestMountStart/serial/StartWithMountSecond (6.22s)

TestMountStart/serial/VerifyMountSecond (0.24s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-651159 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.24s)

TestMountStart/serial/DeleteFirst (1.61s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-639692 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p mount-start-1-639692 --alsologtostderr -v=5: (1.611691857s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

TestMountStart/serial/VerifyMountPostDelete (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-651159 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.24s)

TestMountStart/serial/Stop (1.17s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-651159
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-651159: (1.169660329s)
--- PASS: TestMountStart/serial/Stop (1.17s)

TestMountStart/serial/RestartStopped (7.88s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-651159
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-651159: (6.876315569s)
--- PASS: TestMountStart/serial/RestartStopped (7.88s)

TestMountStart/serial/VerifyMountPostStop (0.24s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-651159 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.24s)

TestMultiNode/serial/FreshStart2Nodes (68.54s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-603406 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-603406 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m8.100807001s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (68.54s)

TestMultiNode/serial/DeployApp2Nodes (5.43s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603406 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603406 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-603406 -- rollout status deployment/busybox: (4.111583605s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603406 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603406 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603406 -- exec busybox-7dff88458-7xjhp -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603406 -- exec busybox-7dff88458-kjw5h -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603406 -- exec busybox-7dff88458-7xjhp -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603406 -- exec busybox-7dff88458-kjw5h -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603406 -- exec busybox-7dff88458-7xjhp -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603406 -- exec busybox-7dff88458-kjw5h -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (5.43s)

TestMultiNode/serial/PingHostFrom2Pods (0.68s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603406 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603406 -- exec busybox-7dff88458-7xjhp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603406 -- exec busybox-7dff88458-7xjhp -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603406 -- exec busybox-7dff88458-kjw5h -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-603406 -- exec busybox-7dff88458-kjw5h -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.68s)

TestMultiNode/serial/AddNode (54.85s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-603406 -v 3 --alsologtostderr
E0923 11:00:14.699511   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-603406 -v 3 --alsologtostderr: (54.243102072s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (54.85s)

TestMultiNode/serial/MultiNodeLabels (0.07s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-603406 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.07s)

TestMultiNode/serial/ProfileList (0.62s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.62s)

TestMultiNode/serial/CopyFile (8.93s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 cp testdata/cp-test.txt multinode-603406:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 ssh -n multinode-603406 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 cp multinode-603406:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1987841504/001/cp-test_multinode-603406.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 ssh -n multinode-603406 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 cp multinode-603406:/home/docker/cp-test.txt multinode-603406-m02:/home/docker/cp-test_multinode-603406_multinode-603406-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 ssh -n multinode-603406 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 ssh -n multinode-603406-m02 "sudo cat /home/docker/cp-test_multinode-603406_multinode-603406-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 cp multinode-603406:/home/docker/cp-test.txt multinode-603406-m03:/home/docker/cp-test_multinode-603406_multinode-603406-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 ssh -n multinode-603406 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 ssh -n multinode-603406-m03 "sudo cat /home/docker/cp-test_multinode-603406_multinode-603406-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 cp testdata/cp-test.txt multinode-603406-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 ssh -n multinode-603406-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 cp multinode-603406-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1987841504/001/cp-test_multinode-603406-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 ssh -n multinode-603406-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 cp multinode-603406-m02:/home/docker/cp-test.txt multinode-603406:/home/docker/cp-test_multinode-603406-m02_multinode-603406.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 ssh -n multinode-603406-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 ssh -n multinode-603406 "sudo cat /home/docker/cp-test_multinode-603406-m02_multinode-603406.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 cp multinode-603406-m02:/home/docker/cp-test.txt multinode-603406-m03:/home/docker/cp-test_multinode-603406-m02_multinode-603406-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 ssh -n multinode-603406-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 ssh -n multinode-603406-m03 "sudo cat /home/docker/cp-test_multinode-603406-m02_multinode-603406-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 cp testdata/cp-test.txt multinode-603406-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 ssh -n multinode-603406-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 cp multinode-603406-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1987841504/001/cp-test_multinode-603406-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 ssh -n multinode-603406-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 cp multinode-603406-m03:/home/docker/cp-test.txt multinode-603406:/home/docker/cp-test_multinode-603406-m03_multinode-603406.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 ssh -n multinode-603406-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 ssh -n multinode-603406 "sudo cat /home/docker/cp-test_multinode-603406-m03_multinode-603406.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 cp multinode-603406-m03:/home/docker/cp-test.txt multinode-603406-m02:/home/docker/cp-test_multinode-603406-m03_multinode-603406-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 ssh -n multinode-603406-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 ssh -n multinode-603406-m02 "sudo cat /home/docker/cp-test_multinode-603406-m03_multinode-603406-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (8.93s)

TestMultiNode/serial/StopNode (2.1s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-603406 node stop m03: (1.175857119s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-603406 status: exit status 7 (463.441053ms)

-- stdout --
	multinode-603406
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-603406-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-603406-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-603406 status --alsologtostderr: exit status 7 (455.900086ms)

-- stdout --
	multinode-603406
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-603406-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-603406-m03
	type: Worker
	host: Stopped
	kubelet: Stopped

-- /stdout --
** stderr ** 
	I0923 11:00:52.984594  161789 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:00:52.984882  161789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:00:52.984892  161789 out.go:358] Setting ErrFile to fd 2...
	I0923 11:00:52.984896  161789 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:00:52.985073  161789 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3772/.minikube/bin
	I0923 11:00:52.985235  161789 out.go:352] Setting JSON to false
	I0923 11:00:52.985265  161789 mustload.go:65] Loading cluster: multinode-603406
	I0923 11:00:52.985390  161789 notify.go:220] Checking for updates...
	I0923 11:00:52.985815  161789 config.go:182] Loaded profile config "multinode-603406": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 11:00:52.985840  161789 status.go:174] checking status of multinode-603406 ...
	I0923 11:00:52.986347  161789 cli_runner.go:164] Run: docker container inspect multinode-603406 --format={{.State.Status}}
	I0923 11:00:53.004595  161789 status.go:364] multinode-603406 host status = "Running" (err=<nil>)
	I0923 11:00:53.004636  161789 host.go:66] Checking if "multinode-603406" exists ...
	I0923 11:00:53.004984  161789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-603406
	I0923 11:00:53.022265  161789 host.go:66] Checking if "multinode-603406" exists ...
	I0923 11:00:53.022529  161789 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 11:00:53.022572  161789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-603406
	I0923 11:00:53.039400  161789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32903 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/multinode-603406/id_rsa Username:docker}
	I0923 11:00:53.130678  161789 ssh_runner.go:195] Run: systemctl --version
	I0923 11:00:53.134786  161789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 11:00:53.145671  161789 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:00:53.192506  161789 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:62 SystemTime:2024-09-23 11:00:53.183215171 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 11:00:53.193182  161789 kubeconfig.go:125] found "multinode-603406" server: "https://192.168.67.2:8443"
	I0923 11:00:53.193218  161789 api_server.go:166] Checking apiserver status ...
	I0923 11:00:53.193267  161789 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0923 11:00:53.203927  161789 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1507/cgroup
	I0923 11:00:53.212957  161789 api_server.go:182] apiserver freezer: "3:freezer:/docker/05ba5cf6bc952472a48b0f3753032fee3ca8914ce5df9c74e7d7d8290ebfd92f/crio/crio-40b8e714e88d7d08b7ae4f2ada648e0d5f2dda229ae2b4e4bed98cb851d852b5"
	I0923 11:00:53.213014  161789 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/05ba5cf6bc952472a48b0f3753032fee3ca8914ce5df9c74e7d7d8290ebfd92f/crio/crio-40b8e714e88d7d08b7ae4f2ada648e0d5f2dda229ae2b4e4bed98cb851d852b5/freezer.state
	I0923 11:00:53.220858  161789 api_server.go:204] freezer state: "THAWED"
	I0923 11:00:53.220889  161789 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I0923 11:00:53.225554  161789 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I0923 11:00:53.225581  161789 status.go:456] multinode-603406 apiserver status = Running (err=<nil>)
	I0923 11:00:53.225592  161789 status.go:176] multinode-603406 status: &{Name:multinode-603406 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 11:00:53.225619  161789 status.go:174] checking status of multinode-603406-m02 ...
	I0923 11:00:53.225906  161789 cli_runner.go:164] Run: docker container inspect multinode-603406-m02 --format={{.State.Status}}
	I0923 11:00:53.242614  161789 status.go:364] multinode-603406-m02 host status = "Running" (err=<nil>)
	I0923 11:00:53.242642  161789 host.go:66] Checking if "multinode-603406-m02" exists ...
	I0923 11:00:53.242879  161789 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-603406-m02
	I0923 11:00:53.260410  161789 host.go:66] Checking if "multinode-603406-m02" exists ...
	I0923 11:00:53.260674  161789 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0923 11:00:53.260713  161789 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-603406-m02
	I0923 11:00:53.279799  161789 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:32908 SSHKeyPath:/home/jenkins/minikube-integration/19689-3772/.minikube/machines/multinode-603406-m02/id_rsa Username:docker}
	I0923 11:00:53.370415  161789 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0923 11:00:53.380900  161789 status.go:176] multinode-603406-m02 status: &{Name:multinode-603406-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0923 11:00:53.380942  161789 status.go:174] checking status of multinode-603406-m03 ...
	I0923 11:00:53.381254  161789 cli_runner.go:164] Run: docker container inspect multinode-603406-m03 --format={{.State.Status}}
	I0923 11:00:53.398109  161789 status.go:364] multinode-603406-m03 host status = "Stopped" (err=<nil>)
	I0923 11:00:53.398131  161789 status.go:377] host is not running, skipping remaining checks
	I0923 11:00:53.398137  161789 status.go:176] multinode-603406-m03 status: &{Name:multinode-603406-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.10s)

TestMultiNode/serial/StartAfterStop (9.14s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-603406 node start m03 -v=7 --alsologtostderr: (8.487409824s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.14s)

TestMultiNode/serial/RestartKeepsNodes (110.72s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-603406
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-603406
multinode_test.go:321: (dbg) Done: out/minikube-linux-amd64 stop -p multinode-603406: (24.656284133s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-603406 --wait=true -v=8 --alsologtostderr
E0923 11:01:30.164996   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/functional-676470/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-603406 --wait=true -v=8 --alsologtostderr: (1m25.973643841s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-603406
E0923 11:02:53.226806   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/functional-676470/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiNode/serial/RestartKeepsNodes (110.72s)

TestMultiNode/serial/DeleteNode (5.2s)
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-603406 node delete m03: (4.651175055s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.20s)

TestMultiNode/serial/StopMultiNode (23.7s)
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-amd64 -p multinode-603406 stop: (23.532125331s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-603406 status: exit status 7 (81.249705ms)

-- stdout --
	multinode-603406
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-603406-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-603406 status --alsologtostderr: exit status 7 (81.960267ms)

-- stdout --
	multinode-603406
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-603406-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0923 11:03:22.123961  171503 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:03:22.124095  171503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:03:22.124106  171503 out.go:358] Setting ErrFile to fd 2...
	I0923 11:03:22.124113  171503 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:03:22.124297  171503 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3772/.minikube/bin
	I0923 11:03:22.124470  171503 out.go:352] Setting JSON to false
	I0923 11:03:22.124502  171503 mustload.go:65] Loading cluster: multinode-603406
	I0923 11:03:22.124621  171503 notify.go:220] Checking for updates...
	I0923 11:03:22.124911  171503 config.go:182] Loaded profile config "multinode-603406": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 11:03:22.124929  171503 status.go:174] checking status of multinode-603406 ...
	I0923 11:03:22.125320  171503 cli_runner.go:164] Run: docker container inspect multinode-603406 --format={{.State.Status}}
	I0923 11:03:22.145595  171503 status.go:364] multinode-603406 host status = "Stopped" (err=<nil>)
	I0923 11:03:22.145621  171503 status.go:377] host is not running, skipping remaining checks
	I0923 11:03:22.145629  171503 status.go:176] multinode-603406 status: &{Name:multinode-603406 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0923 11:03:22.145652  171503 status.go:174] checking status of multinode-603406-m02 ...
	I0923 11:03:22.145898  171503 cli_runner.go:164] Run: docker container inspect multinode-603406-m02 --format={{.State.Status}}
	I0923 11:03:22.162819  171503 status.go:364] multinode-603406-m02 host status = "Stopped" (err=<nil>)
	I0923 11:03:22.162860  171503 status.go:377] host is not running, skipping remaining checks
	I0923 11:03:22.162876  171503 status.go:176] multinode-603406-m02 status: &{Name:multinode-603406-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.70s)

TestMultiNode/serial/RestartMultiNode (54.23s)
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-603406 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-603406 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (53.675961399s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-603406 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (54.23s)

TestMultiNode/serial/ValidateNameConflict (25.57s)
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-603406
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-603406-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-603406-m02 --driver=docker  --container-runtime=crio: exit status 14 (63.138737ms)

-- stdout --
	* [multinode-603406-m02] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-3772/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3772/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-603406-m02' is duplicated with machine name 'multinode-603406-m02' in profile 'multinode-603406'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-603406-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-603406-m03 --driver=docker  --container-runtime=crio: (23.409694866s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-603406
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-603406: exit status 80 (267.065209ms)

-- stdout --
	* Adding node m03 to cluster multinode-603406 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-603406-m03 already exists in multinode-603406-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-603406-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-amd64 delete -p multinode-603406-m03: (1.791753587s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (25.57s)

TestPreload (117.55s)
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-317623 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E0923 11:05:14.702615   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-317623 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m17.346562457s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-317623 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-317623 image pull gcr.io/k8s-minikube/busybox: (3.037711386s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-317623
preload_test.go:58: (dbg) Done: out/minikube-linux-amd64 stop -p test-preload-317623: (5.657863913s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-317623 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
E0923 11:06:30.163878   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/functional-676470/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-317623 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (29.003165826s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-317623 image list
helpers_test.go:175: Cleaning up "test-preload-317623" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-317623
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-317623: (2.287043007s)
--- PASS: TestPreload (117.55s)

TestScheduledStopUnix (95.66s)
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-095232 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-095232 --memory=2048 --driver=docker  --container-runtime=crio: (20.154354642s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-095232 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-095232 -n scheduled-stop-095232
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-095232 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I0923 11:07:03.873783   10562 retry.go:31] will retry after 54.67µs: open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/scheduled-stop-095232/pid: no such file or directory
I0923 11:07:03.874949   10562 retry.go:31] will retry after 148.244µs: open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/scheduled-stop-095232/pid: no such file or directory
I0923 11:07:03.876094   10562 retry.go:31] will retry after 253.123µs: open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/scheduled-stop-095232/pid: no such file or directory
I0923 11:07:03.877254   10562 retry.go:31] will retry after 233.68µs: open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/scheduled-stop-095232/pid: no such file or directory
I0923 11:07:03.878389   10562 retry.go:31] will retry after 496.985µs: open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/scheduled-stop-095232/pid: no such file or directory
I0923 11:07:03.879525   10562 retry.go:31] will retry after 556.979µs: open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/scheduled-stop-095232/pid: no such file or directory
I0923 11:07:03.880664   10562 retry.go:31] will retry after 1.263852ms: open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/scheduled-stop-095232/pid: no such file or directory
I0923 11:07:03.882869   10562 retry.go:31] will retry after 1.993696ms: open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/scheduled-stop-095232/pid: no such file or directory
I0923 11:07:03.885074   10562 retry.go:31] will retry after 3.785925ms: open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/scheduled-stop-095232/pid: no such file or directory
I0923 11:07:03.889284   10562 retry.go:31] will retry after 3.818213ms: open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/scheduled-stop-095232/pid: no such file or directory
I0923 11:07:03.893559   10562 retry.go:31] will retry after 5.157481ms: open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/scheduled-stop-095232/pid: no such file or directory
I0923 11:07:03.899804   10562 retry.go:31] will retry after 4.918782ms: open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/scheduled-stop-095232/pid: no such file or directory
I0923 11:07:03.905062   10562 retry.go:31] will retry after 13.12691ms: open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/scheduled-stop-095232/pid: no such file or directory
I0923 11:07:03.918305   10562 retry.go:31] will retry after 17.614719ms: open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/scheduled-stop-095232/pid: no such file or directory
I0923 11:07:03.936623   10562 retry.go:31] will retry after 40.812495ms: open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/scheduled-stop-095232/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-095232 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-095232 -n scheduled-stop-095232
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-095232
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-095232 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-095232
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-095232: exit status 7 (64.12398ms)

-- stdout --
	scheduled-stop-095232
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-095232 -n scheduled-stop-095232
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-095232 -n scheduled-stop-095232: exit status 7 (64.323279ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-095232" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-095232
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p scheduled-stop-095232: (4.237189725s)
--- PASS: TestScheduledStopUnix (95.66s)

TestInsufficientStorage (9.68s)
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-amd64 start -p insufficient-storage-257702 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p insufficient-storage-257702 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.361411777s)

-- stdout --
	{"specversion":"1.0","id":"26260c2b-b53e-4eb2-bee6-d6f4dc65c604","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-257702] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"17ebcbdf-1a98-46a3-b392-6df9c7a0c4de","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19689"}}
	{"specversion":"1.0","id":"80dc8003-bc03-45a3-9d1e-bff8cc6a528b","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"b6a6e051-6e3d-46be-bb9a-cd0397e7aa79","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/19689-3772/kubeconfig"}}
	{"specversion":"1.0","id":"5e6fc147-be80-47cd-895f-55a3b3d9b7fe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3772/.minikube"}}
	{"specversion":"1.0","id":"aaf068cd-0fa1-4298-9871-860f45f91f47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"8f6d7d7b-ef39-4dbc-b6ae-e9fd758bafaa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"f55604ba-b4d0-4dcf-a39e-7f36008f75be","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"23d33853-1799-442c-a819-a53128f748e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"11a93ade-34d6-429e-8c66-135730aa4778","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"a0636c85-6461-4cd8-9675-a102b8cc6b57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"bf64a666-6cad-4a4c-b6e9-5a0b9706efbc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-257702\" primary control-plane node in \"insufficient-storage-257702\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"e30baf96-ae73-4493-a451-3c6e3f1f0889","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1726784731-19672 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"a526e4a4-be26-489f-b02e-f4508995f010","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"3aca3af1-3a61-4e2c-9dcb-92156874c4dc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-257702 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-257702 --output=json --layout=cluster: exit status 7 (251.26884ms)

-- stdout --
	{"Name":"insufficient-storage-257702","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-257702","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0923 11:08:26.598992  193856 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-257702" does not appear in /home/jenkins/minikube-integration/19689-3772/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p insufficient-storage-257702 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p insufficient-storage-257702 --output=json --layout=cluster: exit status 7 (257.38229ms)

-- stdout --
	{"Name":"insufficient-storage-257702","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-257702","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E0923 11:08:26.855969  193954 status.go:451] kubeconfig endpoint: get endpoint: "insufficient-storage-257702" does not appear in /home/jenkins/minikube-integration/19689-3772/kubeconfig
	E0923 11:08:26.866439  193954 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/insufficient-storage-257702/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-257702" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p insufficient-storage-257702
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p insufficient-storage-257702: (1.813052629s)
--- PASS: TestInsufficientStorage (9.68s)

TestRunningBinaryUpgrade (69.95s)
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.3842961057 start -p running-upgrade-187782 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.3842961057 start -p running-upgrade-187782 --memory=2200 --vm-driver=docker  --container-runtime=crio: (23.436416065s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-187782 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-187782 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (41.609641461s)
helpers_test.go:175: Cleaning up "running-upgrade-187782" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-187782
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-187782: (2.411785455s)
--- PASS: TestRunningBinaryUpgrade (69.95s)

TestKubernetesUpgrade (358.24s)
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-061700 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-061700 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (42.651758413s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-061700
E0923 11:10:14.700315   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-061700: (9.089555693s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-061700 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-061700 status --format={{.Host}}: exit status 7 (97.33087ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-061700 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-061700 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m26.949640702s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-061700 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-061700 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-061700 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (81.507067ms)

-- stdout --
	* [kubernetes-upgrade-061700] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-3772/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3772/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.1 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-061700
	    minikube start -p kubernetes-upgrade-061700 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0617002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.1, by running:
	    
	    minikube start -p kubernetes-upgrade-061700 --kubernetes-version=v1.31.1
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-061700 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-061700 --memory=2200 --kubernetes-version=v1.31.1 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.809561127s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-061700" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-061700
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-061700: (2.474182853s)
--- PASS: TestKubernetesUpgrade (358.24s)
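The flow this test exercises can be reproduced by hand. A minimal sketch, using the profile name and versions from the log above; the recreate commands are exactly the remediation the K8S_DOWNGRADE_UNSUPPORTED message prints:

```shell
# Upgrade in place is supported: stop, then start with the newer version.
minikube stop -p kubernetes-upgrade-061700
minikube start -p kubernetes-upgrade-061700 --memory=2200 \
  --kubernetes-version=v1.31.1 --driver=docker --container-runtime=crio

# Downgrading in place is refused (exit 106, K8S_DOWNGRADE_UNSUPPORTED).
# To actually get v1.20.0, recreate the cluster as the error suggests:
minikube delete -p kubernetes-upgrade-061700
minikube start -p kubernetes-upgrade-061700 --kubernetes-version=v1.20.0
```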

TestMissingContainerUpgrade (177.88s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3858505490 start -p missing-upgrade-914228 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3858505490 start -p missing-upgrade-914228 --memory=2200 --driver=docker  --container-runtime=crio: (1m41.994773372s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-914228
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-914228: (10.362159235s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-914228
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-amd64 start -p missing-upgrade-914228 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0923 11:11:30.164309   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/functional-676470/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-amd64 start -p missing-upgrade-914228 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m1.02348402s)
helpers_test.go:175: Cleaning up "missing-upgrade-914228" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p missing-upgrade-914228
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p missing-upgrade-914228: (2.019076031s)
--- PASS: TestMissingContainerUpgrade (177.88s)
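The scenario above amounts to deleting the node container behind minikube's back and then letting a newer binary recover the profile. A sketch assembled from the commands in the log (the versioned binary path is the test's own temp artifact):

```shell
# 1. Create a cluster with an older minikube release.
/tmp/minikube-v1.26.0.3858505490 start -p missing-upgrade-914228 --memory=2200 \
  --driver=docker --container-runtime=crio

# 2. Remove the node container directly via docker, so minikube's stored
#    state no longer matches what actually exists.
docker stop missing-upgrade-914228
docker rm missing-upgrade-914228

# 3. Start the same profile with the current binary; it must detect the
#    missing container and recreate it.
out/minikube-linux-amd64 start -p missing-upgrade-914228 --memory=2200 \
  --alsologtostderr -v=1 --driver=docker --container-runtime=crio
```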

TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-408858 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-408858 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (87.228144ms)

-- stdout --
	* [NoKubernetes-408858] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-3772/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3772/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.09s)
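The exit status 14 here is the expected validation failure: --no-kubernetes and --kubernetes-version are mutually exclusive. For reference, a sketch of invocations that pass validation (profile name taken from the test):

```shell
# Valid: no version flag alongside --no-kubernetes.
minikube start -p NoKubernetes-408858 --no-kubernetes --driver=docker --container-runtime=crio

# If a version is pinned in the global config, clear it first; this is
# the remediation the error message itself prints.
minikube config unset kubernetes-version
```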

TestNoKubernetes/serial/StartWithK8s (28.37s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-408858 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-408858 --driver=docker  --container-runtime=crio: (27.947147253s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-408858 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (28.37s)

TestNetworkPlugins/group/false (7.57s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-185135 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-185135 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (181.19184ms)

-- stdout --
	* [false-185135] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=19689
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/19689-3772/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3772/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	

-- /stdout --
** stderr ** 
	I0923 11:08:32.482194  196198 out.go:345] Setting OutFile to fd 1 ...
	I0923 11:08:32.482311  196198 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:08:32.482320  196198 out.go:358] Setting ErrFile to fd 2...
	I0923 11:08:32.482324  196198 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0923 11:08:32.482520  196198 root.go:338] Updating PATH: /home/jenkins/minikube-integration/19689-3772/.minikube/bin
	I0923 11:08:32.483156  196198 out.go:352] Setting JSON to false
	I0923 11:08:32.484158  196198 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent","uptime":3056,"bootTime":1727086656,"procs":232,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1069-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0923 11:08:32.484219  196198 start.go:139] virtualization: kvm guest
	I0923 11:08:32.487004  196198 out.go:177] * [false-185135] minikube v1.34.0 on Ubuntu 20.04 (kvm/amd64)
	I0923 11:08:32.488629  196198 out.go:177]   - MINIKUBE_LOCATION=19689
	I0923 11:08:32.488690  196198 notify.go:220] Checking for updates...
	I0923 11:08:32.492286  196198 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0923 11:08:32.494260  196198 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/19689-3772/kubeconfig
	I0923 11:08:32.496452  196198 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/19689-3772/.minikube
	I0923 11:08:32.498322  196198 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0923 11:08:32.500136  196198 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0923 11:08:32.502665  196198 config.go:182] Loaded profile config "NoKubernetes-408858": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 11:08:32.502896  196198 config.go:182] Loaded profile config "force-systemd-env-433419": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 11:08:32.503067  196198 config.go:182] Loaded profile config "offline-crio-391658": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
	I0923 11:08:32.503193  196198 driver.go:394] Setting default libvirt URI to qemu:///system
	I0923 11:08:32.532765  196198 docker.go:123] docker version: linux-27.3.1:Docker Engine - Community
	I0923 11:08:32.532880  196198 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I0923 11:08:32.600074  196198 info.go:266] docker info: {ID:TS6T:UINC:MIYS:RZPA:KS6T:4JQK:7JHN:D6RA:LDP2:MHAE:G32M:C5NQ Containers:3 ContainersRunning:3 ContainersPaused:0 ContainersStopped:0 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:58 OomKillDisable:true NGoroutines:90 SystemTime:2024-09-23 11:08:32.580873414 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1069-gcp OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:33647935488 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ubuntu-20-agent Labels:[] ExperimentalBuild:false ServerVersion:27.3.1 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c Expected:7f7fdf5fed64eb6a7caf99b3e12efcf9d60e311c} RuncCommit:{ID:v1.1.14-0-g2c9f560 Expected:v1.1.14-0-g2c9f560} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.17.1] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.29.7] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.23.0]] Warnings:<nil>}}
	I0923 11:08:32.600228  196198 docker.go:318] overlay module found
	I0923 11:08:32.603507  196198 out.go:177] * Using the docker driver based on user configuration
	I0923 11:08:32.604992  196198 start.go:297] selected driver: docker
	I0923 11:08:32.605015  196198 start.go:901] validating driver "docker" against <nil>
	I0923 11:08:32.605030  196198 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0923 11:08:32.607731  196198 out.go:201] 
	W0923 11:08:32.609449  196198 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0923 11:08:32.610923  196198 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-185135 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-185135

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-185135

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-185135

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-185135

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-185135

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-185135

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-185135

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-185135

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-185135

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-185135

>>> host: /etc/nsswitch.conf:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> host: /etc/hosts:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> host: /etc/resolv.conf:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-185135

>>> host: crictl pods:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> host: crictl containers:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> k8s: describe netcat deployment:
error: context "false-185135" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-185135" does not exist

>>> k8s: netcat logs:
error: context "false-185135" does not exist

>>> k8s: describe coredns deployment:
error: context "false-185135" does not exist

>>> k8s: describe coredns pods:
error: context "false-185135" does not exist

>>> k8s: coredns logs:
error: context "false-185135" does not exist

>>> k8s: describe api server pod(s):
error: context "false-185135" does not exist

>>> k8s: api server logs:
error: context "false-185135" does not exist

>>> host: /etc/cni:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> host: ip a s:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> host: ip r s:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> host: iptables-save:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> host: iptables table nat:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> k8s: describe kube-proxy daemon set:
error: context "false-185135" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-185135" does not exist

>>> k8s: kube-proxy logs:
error: context "false-185135" does not exist

>>> host: kubelet daemon status:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> host: kubelet daemon config:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> k8s: kubelet logs:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-185135

>>> host: docker daemon status:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> host: docker daemon config:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> host: /etc/docker/daemon.json:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> host: docker system info:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> host: cri-docker daemon status:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> host: cri-docker daemon config:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> host: cri-dockerd version:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> host: containerd daemon status:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> host: containerd daemon config:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> host: /etc/containerd/config.toml:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> host: containerd config dump:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> host: crio daemon status:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> host: crio daemon config:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> host: /etc/crio:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

>>> host: crio config:
* Profile "false-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-185135"

----------------------- debugLogs end: false-185135 [took: 7.241751796s] --------------------------------
helpers_test.go:175: Cleaning up "false-185135" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-185135
--- PASS: TestNetworkPlugins/group/false (7.57s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (7.1s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-408858 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-408858 --no-kubernetes --driver=docker  --container-runtime=crio: (4.791810431s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-408858 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-408858 status -o json: exit status 2 (332.446422ms)

-- stdout --
	{"Name":"NoKubernetes-408858","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-408858
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-408858: (1.974595016s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (7.10s)

                                                
                                    
TestNoKubernetes/serial/Start (8.48s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-408858 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-408858 --no-kubernetes --driver=docker  --container-runtime=crio: (8.484392791s)
--- PASS: TestNoKubernetes/serial/Start (8.48s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-408858 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-408858 "sudo systemctl is-active --quiet service kubelet": exit status 1 (269.672038ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.83s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.83s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.19s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-408858
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-408858: (1.194491417s)
--- PASS: TestNoKubernetes/serial/Stop (1.19s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.22s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-408858 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-408858 --driver=docker  --container-runtime=crio: (7.21546735s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.22s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-408858 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-408858 "sudo systemctl is-active --quiet service kubelet": exit status 1 (246.494238ms)

** stderr **
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.25s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (2.57s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (2.57s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (141.11s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.4188329460 start -p stopped-upgrade-395551 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.4188329460 start -p stopped-upgrade-395551 --memory=2200 --vm-driver=docker  --container-runtime=crio: (1m45.975735657s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.4188329460 -p stopped-upgrade-395551 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.4188329460 -p stopped-upgrade-395551 stop: (2.376084057s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-395551 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-395551 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (32.755220257s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (141.11s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-395551
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (0.94s)

                                                
                                    
TestPause/serial/Start (45.38s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-828212 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-828212 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (45.38473349s)
--- PASS: TestPause/serial/Start (45.38s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (43.91s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-185135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-185135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (43.909791101s)
--- PASS: TestNetworkPlugins/group/auto/Start (43.91s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (69.36s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-185135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-185135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (1m9.358653894s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (69.36s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (37.01s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-828212 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E0923 11:13:17.766527   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-828212 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (36.996904971s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (37.01s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-185135 "pgrep -a kubelet"
I0923 11:13:38.343635   10562 config.go:182] Loaded profile config "auto-185135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.33s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-185135 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-rvds2" [ef18288d-4a75-4c36-babf-5b2ed34fd98b] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-rvds2" [ef18288d-4a75-4c36-babf-5b2ed34fd98b] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004370496s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.24s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-185135 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-185135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-185135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)

                                                
                                    
TestPause/serial/Pause (0.66s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-828212 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.66s)

                                                
                                    
TestPause/serial/VerifyStatus (0.29s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-828212 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-828212 --output=json --layout=cluster: exit status 2 (285.141193ms)

-- stdout --
	{"Name":"pause-828212","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-828212","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.29s)

                                                
                                    
TestPause/serial/Unpause (0.59s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-828212 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.59s)

                                                
                                    
TestPause/serial/PauseAgain (0.69s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-828212 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.69s)

                                                
                                    
TestPause/serial/DeletePaused (2.26s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-828212 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-828212 --alsologtostderr -v=5: (2.255360615s)
--- PASS: TestPause/serial/DeletePaused (2.26s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (15.02s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
pause_test.go:142: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (14.961353015s)
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-828212
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-828212: exit status 1 (17.82668ms)

-- stdout --
	[]
-- /stdout --
** stderr **
	Error response from daemon: get pause-828212: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (15.02s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (59.69s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-185135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-185135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (59.693571475s)
--- PASS: TestNetworkPlugins/group/calico/Start (59.69s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (53.98s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-185135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-185135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (53.981427057s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (53.98s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (5.07s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-xl64v" [29ea8a76-d5f3-4a5e-8489-2c73b0a504b5] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 5.063671971s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (5.07s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-185135 "pgrep -a kubelet"
I0923 11:14:16.983853   10562 config.go:182] Loaded profile config "kindnet-185135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-185135 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-qdb8d" [0cd7aa95-f878-459e-8941-35fbce70c194] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-qdb8d" [0cd7aa95-f878-459e-8941-35fbce70c194] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.003710716s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.26s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-185135 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-185135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-185135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.12s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (68.52s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-185135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-185135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m8.519868788s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (68.52s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-185135 "pgrep -a kubelet"
I0923 11:15:05.498693   10562 config.go:182] Loaded profile config "custom-flannel-185135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.28s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-185135 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-vcw7p" [eae4a182-5063-4b46-937c-64a0ca8d1fe5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-vcw7p" [eae4a182-5063-4b46-937c-64a0ca8d1fe5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004030973s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.23s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-gwb4c" [ef246d94-4593-4164-8f20-e2b1c4299c78] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.005053471s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-185135 "pgrep -a kubelet"
I0923 11:15:12.637865   10562 config.go:182] Loaded profile config "calico-185135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (10.18s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-185135 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-zsgtm" [0b5a5435-3860-4d03-91e1-d65bd3536169] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0923 11:15:14.699704   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-zsgtm" [0b5a5435-3860-4d03-91e1-d65bd3536169] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 10.003936298s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (10.18s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-185135 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-185135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-185135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-185135 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.14s)

TestNetworkPlugins/group/calico/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-185135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.11s)

TestNetworkPlugins/group/calico/HairPin (0.12s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-185135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.12s)

TestNetworkPlugins/group/flannel/Start (57.34s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-185135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-185135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (57.340705517s)
--- PASS: TestNetworkPlugins/group/flannel/Start (57.34s)

TestNetworkPlugins/group/bridge/Start (64.17s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-185135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-185135 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m4.168960299s)
--- PASS: TestNetworkPlugins/group/bridge/Start (64.17s)

TestStartStop/group/old-k8s-version/serial/FirstStart (130.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-097453 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-097453 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m10.092488293s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (130.09s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-185135 "pgrep -a kubelet"
I0923 11:15:56.377415   10562 config.go:182] Loaded profile config "enable-default-cni-185135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.35s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.24s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-185135 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-fnfqb" [af20c1d4-ad20-4d61-b3b7-199ce3584e54] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-fnfqb" [af20c1d4-ad20-4d61-b3b7-199ce3584e54] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 10.004385569s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (10.24s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-185135 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.14s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-185135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.14s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-185135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.17s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-bjk4v" [a2b53a48-ef08-411c-ba82-66b8304262ef] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003623192s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestStartStop/group/no-preload/serial/FirstStart (61s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-441416 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0923 11:16:30.164577   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/functional-676470/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-441416 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m1.000193277s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (61.00s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-185135 "pgrep -a kubelet"
I0923 11:16:31.987814   10562 config.go:182] Loaded profile config "flannel-185135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.26s)

TestNetworkPlugins/group/flannel/NetCatPod (9.2s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-185135 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-p8jsd" [079f8e58-625c-42be-a9cb-f824e8ab13f0] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-p8jsd" [079f8e58-625c-42be-a9cb-f824e8ab13f0] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004279691s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.20s)

TestNetworkPlugins/group/flannel/DNS (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-185135 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.15s)

TestNetworkPlugins/group/flannel/Localhost (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-185135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.13s)

TestNetworkPlugins/group/flannel/HairPin (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-185135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.14s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-185135 "pgrep -a kubelet"
I0923 11:16:42.052560   10562 config.go:182] Loaded profile config "bridge-185135": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.1
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.34s)

TestNetworkPlugins/group/bridge/NetCatPod (10.21s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-185135 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-cbh2r" [0141f4f2-8343-4e1a-bcb3-2f57d139919d] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-cbh2r" [0141f4f2-8343-4e1a-bcb3-2f57d139919d] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004191315s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.21s)

TestNetworkPlugins/group/bridge/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-185135 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.14s)

TestNetworkPlugins/group/bridge/Localhost (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-185135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.12s)

TestNetworkPlugins/group/bridge/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-185135 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.15s)
E0923 11:21:35.977987   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:21:37.582575   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/enable-default-cni-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:21:42.251104   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/bridge-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:21:42.257556   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/bridge-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:21:42.268965   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/bridge-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:21:42.290437   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/bridge-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:21:42.331890   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/bridge-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:21:42.413343   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/bridge-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:21:42.575303   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/bridge-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:21:42.897009   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/bridge-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:21:43.539210   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/bridge-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:21:44.821336   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/bridge-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:21:46.219977   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:21:47.382791   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/bridge-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:21:52.504781   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/bridge-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:21:55.462689   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/kindnet-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:22:02.746180   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/bridge-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:22:06.702182   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/flannel-185135/client.crt: no such file or directory" logger="UnhandledError"

TestStartStop/group/embed-certs/serial/FirstStart (45.6s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-276118 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-276118 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (45.595880421s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (45.60s)

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-835022 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-835022 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (1m11.408759437s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (71.41s)

TestStartStop/group/no-preload/serial/DeployApp (9.24s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-441416 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f592327c-23e9-46c5-be66-7019659de3e2] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f592327c-23e9-46c5-be66-7019659de3e2] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 9.004115454s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-441416 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (9.24s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-441416 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-441416 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.86s)

TestStartStop/group/no-preload/serial/Stop (12.12s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-441416 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p no-preload-441416 --alsologtostderr -v=3: (12.122236515s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.12s)

TestStartStop/group/embed-certs/serial/DeployApp (10.26s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-276118 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [da03a0bc-9684-4d13-8674-59395d26e36a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [da03a0bc-9684-4d13-8674-59395d26e36a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004576794s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-276118 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.26s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-441416 -n no-preload-441416
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-441416 -n no-preload-441416: exit status 7 (64.209042ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-441416 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/no-preload/serial/SecondStart (261.94s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-441416 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-441416 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m21.639452182s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-441416 -n no-preload-441416
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (261.94s)

TestStartStop/group/old-k8s-version/serial/DeployApp (10.4s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-097453 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2a54cb82-26dc-4220-8d48-caafe9e58121] Pending
helpers_test.go:344: "busybox" [2a54cb82-26dc-4220-8d48-caafe9e58121] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2a54cb82-26dc-4220-8d48-caafe9e58121] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 10.004094543s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-097453 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (10.40s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-276118 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-276118 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.98s)

TestStartStop/group/embed-certs/serial/Stop (12.39s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-276118 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p embed-certs-276118 --alsologtostderr -v=3: (12.385644724s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.39s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.88s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-097453 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-097453 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.88s)

TestStartStop/group/old-k8s-version/serial/Stop (12.44s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-097453 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-097453 --alsologtostderr -v=3: (12.435277396s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.44s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-276118 -n embed-certs-276118
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-276118 -n embed-certs-276118: exit status 7 (71.51106ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-276118 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.18s)
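For context on the "exit status 7 (may be ok)" lines in these EnableAddonAfterStop steps: `minikube status` exits non-zero for any non-Running state, and the harness tolerates code 7 ("Stopped") because the cluster was deliberately stopped before re-enabling addons. A minimal sketch of that check, using a hypothetical `minikube_status` stand-in rather than the real binary:

```shell
# Hypothetical stand-in for `out/minikube-linux-amd64 status --format={{.Host}}`
# against a stopped cluster: prints "Stopped" and exits with code 7.
minikube_status() { echo "Stopped"; return 7; }

code=0
out=$(minikube_status) || code=$?

# Exit code 7 with a "Stopped" host is expected here, so treat it as
# non-fatal, mirroring "status error: exit status 7 (may be ok)" above.
if [ "$code" -eq 7 ] && [ "$out" = "Stopped" ]; then
  echo "may be ok"
fi
```

The `|| code=$?` guard captures the non-zero exit without aborting under `set -e`, which is the same tolerance the test asserts.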

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (262.9s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-276118 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-276118 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m22.613937882s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-276118 -n embed-certs-276118
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (262.90s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-097453 -n old-k8s-version-097453
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-097453 -n old-k8s-version-097453: exit status 7 (62.742386ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-097453 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/old-k8s-version/serial/SecondStart (128.45s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-097453 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p old-k8s-version-097453 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m8.154541723s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-097453 -n old-k8s-version-097453
E0923 11:20:26.875010   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/calico-185135/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (128.45s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-835022 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dd48bb76-4bb6-4db7-888d-55938bd3ccbe] Pending
helpers_test.go:344: "busybox" [dd48bb76-4bb6-4db7-888d-55938bd3ccbe] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dd48bb76-4bb6-4db7-888d-55938bd3ccbe] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.00366173s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-835022 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.29s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-835022 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-835022 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.94s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (12.59s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-835022 --alsologtostderr -v=3
E0923 11:18:38.574857   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/auto-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:18:38.581227   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/auto-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:18:38.592653   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/auto-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:18:38.614104   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/auto-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:18:38.655641   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/auto-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:18:38.737165   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/auto-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:18:38.898771   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/auto-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:18:39.223232   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/auto-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:18:39.864898   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/auto-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:18:41.146649   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/auto-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:18:43.708757   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/auto-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:18:48.830701   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/auto-185135/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p default-k8s-diff-port-835022 --alsologtostderr -v=3: (12.594696349s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (12.59s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-835022 -n default-k8s-diff-port-835022
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-835022 -n default-k8s-diff-port-835022: exit status 7 (73.778387ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-835022 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.19s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (261.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-835022 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0923 11:18:59.072029   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/auto-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:19:11.604426   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/kindnet-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:19:11.610844   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/kindnet-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:19:11.622218   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/kindnet-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:19:11.643581   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/kindnet-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:19:11.685048   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/kindnet-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:19:11.766733   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/kindnet-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:19:11.928237   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/kindnet-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:19:12.249995   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/kindnet-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:19:12.891924   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/kindnet-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:19:14.173195   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/kindnet-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:19:16.734602   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/kindnet-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:19:19.553968   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/auto-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:19:21.856307   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/kindnet-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:19:32.098014   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/kindnet-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:19:33.229109   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/functional-676470/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:19:52.579547   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/kindnet-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:00.515431   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/auto-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:05.717678   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/custom-flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:05.724058   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/custom-flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:05.735340   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/custom-flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:05.756814   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/custom-flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:05.798273   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/custom-flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:05.879811   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/custom-flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:06.041475   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/custom-flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:06.363588   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/custom-flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:06.379113   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/calico-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:06.385489   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/calico-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:06.396973   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/calico-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:06.418391   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/calico-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:06.459841   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/calico-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:06.541300   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/calico-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:06.702933   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/calico-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:07.005579   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/custom-flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:07.024967   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/calico-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:07.666982   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/calico-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:08.287615   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/custom-flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:08.948891   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/calico-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:10.849051   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/custom-flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:11.510870   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/calico-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:14.699499   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/addons-445250/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:15.970571   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/custom-flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:16.632948   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/calico-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:26.212120   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/custom-flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-835022 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (4m21.627574564s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-835022 -n default-k8s-diff-port-835022
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (261.92s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-nrtsk" [f19bc07b-1f63-49e8-80e2-a7bc1dbadc47] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005583657s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-nrtsk" [f19bc07b-1f63-49e8-80e2-a7bc1dbadc47] Running
E0923 11:20:33.541169   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/kindnet-185135/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004852766s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-097453 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.08s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-097453 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20210326-1e038dc5
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/old-k8s-version/serial/Pause (2.48s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p old-k8s-version-097453 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-097453 -n old-k8s-version-097453
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-097453 -n old-k8s-version-097453: exit status 2 (284.227458ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-097453 -n old-k8s-version-097453
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-097453 -n old-k8s-version-097453: exit status 2 (285.374588ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p old-k8s-version-097453 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-097453 -n old-k8s-version-097453
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p old-k8s-version-097453 -n old-k8s-version-097453
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (2.48s)
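The Pause subtest expects `status` to exit with code 2 once the cluster is paused: `{{.APIServer}}` reports Paused while `{{.Kubelet}}` reports Stopped, and the harness accepts both non-zero exits. A sketch of that expectation with hypothetical stand-in functions in place of the real binary:

```shell
# Hypothetical stand-ins modelling `status --format={{.APIServer}}` and
# `status --format={{.Kubelet}}` against a paused cluster: both exit code 2.
apiserver_status() { echo "Paused"; return 2; }
kubelet_status() { echo "Stopped"; return 2; }

api_code=0; api=$(apiserver_status) || api_code=$?
kub_code=0; kub=$(kubelet_status) || kub_code=$?

# Exit status 2 with these component states is what the harness records as
# "status error: exit status 2 (may be ok)" after `minikube pause`.
if [ "$api_code" -eq 2 ] && [ "$api" = "Paused" ] && \
   [ "$kub_code" -eq 2 ] && [ "$kub" = "Stopped" ]; then
  echo "cluster paused"
fi
```

After `unpause`, the same two status queries succeed with exit 0, which is why the final two `status` runs in this subtest log no non-zero exit.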

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (28.64s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-153049 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0923 11:20:46.693966   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/custom-flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:47.357090   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/calico-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:56.604961   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/enable-default-cni-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:56.611337   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/enable-default-cni-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:56.622743   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/enable-default-cni-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:56.644312   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/enable-default-cni-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:56.685996   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/enable-default-cni-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:56.768079   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/enable-default-cni-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:56.929957   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/enable-default-cni-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:57.252181   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/enable-default-cni-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:57.893961   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/enable-default-cni-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:20:59.176166   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/enable-default-cni-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:21:01.737658   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/enable-default-cni-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:21:06.859844   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/enable-default-cni-185135/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-153049 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (28.639483356s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (28.64s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.76s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-153049 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.76s)

TestStartStop/group/newest-cni/serial/Stop (1.2s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-153049 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-153049 --alsologtostderr -v=3: (1.196160155s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.20s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-153049 -n newest-cni-153049
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-153049 -n newest-cni-153049: exit status 7 (63.464319ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-153049 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.17s)

TestStartStop/group/newest-cni/serial/SecondStart (12.64s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-153049 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1
E0923 11:21:17.101172   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/enable-default-cni-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:21:22.437323   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/auto-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:21:25.725331   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:21:25.731724   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:21:25.743122   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:21:25.764465   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:21:25.805806   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:21:25.887305   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:21:26.048775   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:21:26.370447   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-153049 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.1: (12.337817247s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-153049 -n newest-cni-153049
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (12.64s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-153049 image list --format=json
E0923 11:21:27.012635   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/newest-cni/serial/Pause (2.64s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-153049 --alsologtostderr -v=1
E0923 11:21:27.656014   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/custom-flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-153049 -n newest-cni-153049
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-153049 -n newest-cni-153049: exit status 2 (285.895035ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-153049 -n newest-cni-153049
E0923 11:21:28.294438   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
E0923 11:21:28.318862   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/calico-185135/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-153049 -n newest-cni-153049: exit status 2 (282.709544ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-153049 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-153049 -n newest-cni-153049
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-153049 -n newest-cni-153049
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.64s)
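The Pause sequence above leans on `minikube status` exit codes: while components are paused or stopped, `status` returns a non-zero code (2 here, or 7 for a stopped host earlier in this log), which the test logs as "status error: exit status N (may be ok)" rather than failing outright. A minimal sketch of that tolerance check, with `fake_minikube_status` standing in as a hypothetical stub for the real `minikube status` binary:

```shell
#!/bin/sh
# Hypothetical stub in place of `out/minikube-linux-amd64 status ...`;
# mimics the log above, where status prints "Paused" and exits 2.
fake_minikube_status() {
  echo "Paused"
  return 2
}

# Mirror the test's tolerance: exit 0 is healthy, exit 2 "may be ok"
# while components are paused/stopped, anything else is a real failure.
check_status() {
  out=$(fake_minikube_status)
  code=$?
  case $code in
    0) echo "running: $out" ;;
    2) echo "status error: exit status 2 (may be ok): $out" ;;
    *) echo "unexpected failure: exit status $code" >&2; return 1 ;;
  esac
}

check_status
```

This is a sketch of the pattern, not the test's actual helper; the real assertions live in start_stop_delete_test.go:311.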

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-46pb7" [14f3fd1c-7ac7-4858-a26d-372a9e864bd2] Running
E0923 11:22:18.544106   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/enable-default-cni-185135/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004196479s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-46pb7" [14f3fd1c-7ac7-4858-a26d-372a9e864bd2] Running
E0923 11:22:23.227882   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/bridge-185135/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003116001s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-441416 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-441416 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/no-preload/serial/Pause (2.61s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p no-preload-441416 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-441416 -n no-preload-441416
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-441416 -n no-preload-441416: exit status 2 (294.321301ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-441416 -n no-preload-441416
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-441416 -n no-preload-441416: exit status 2 (295.684185ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p no-preload-441416 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-441416 -n no-preload-441416
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p no-preload-441416 -n no-preload-441416
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.61s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zgh6c" [74a07fca-81d9-43b0-bb13-f817fb5f123e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003488966s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-zgh6c" [74a07fca-81d9-43b0-bb13-f817fb5f123e] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00418912s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-276118 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-276118 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/embed-certs/serial/Pause (2.59s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p embed-certs-276118 --alsologtostderr -v=1
E0923 11:22:47.663903   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-276118 -n embed-certs-276118
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-276118 -n embed-certs-276118: exit status 2 (282.41326ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-276118 -n embed-certs-276118
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-276118 -n embed-certs-276118: exit status 2 (283.672883ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p embed-certs-276118 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-276118 -n embed-certs-276118
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p embed-certs-276118 -n embed-certs-276118
E0923 11:22:49.577852   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/custom-flannel-185135/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.59s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2srfq" [97d904a5-91e7-4674-91c1-add172fde062] Running
E0923 11:23:15.407405   10562 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/19689-3772/.minikube/profiles/old-k8s-version-097453/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003573445s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-2srfq" [97d904a5-91e7-4674-91c1-add172fde062] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00428445s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-835022 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.07s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-835022 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20240813-c6f155d6
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.22s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.55s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p default-k8s-diff-port-835022 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-835022 -n default-k8s-diff-port-835022
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-835022 -n default-k8s-diff-port-835022: exit status 2 (280.743403ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-835022 -n default-k8s-diff-port-835022
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-835022 -n default-k8s-diff-port-835022: exit status 2 (280.239186ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p default-k8s-diff-port-835022 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-835022 -n default-k8s-diff-port-835022
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-835022 -n default-k8s-diff-port-835022
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.55s)

Test skip (25/327)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.1/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.1/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.1/cached-images (0.00s)

TestDownloadOnly/v1.31.1/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.1/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.1/binaries (0.00s)

TestDownloadOnly/v1.31.1/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.1/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.1/kubectl (0.00s)

TestAddons/serial/Volcano (0s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:817: skipping: crio not supported
--- SKIP: TestAddons/serial/Volcano (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:438: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestNetworkPlugins/group/kubenet (3.75s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-185135 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-185135

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-185135

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-185135

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-185135

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-185135

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-185135

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-185135

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-185135

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-185135

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-185135

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> host: /etc/hosts:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> host: /etc/resolv.conf:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-185135

>>> host: crictl pods:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> host: crictl containers:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> k8s: describe netcat deployment:
error: context "kubenet-185135" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-185135" does not exist

>>> k8s: netcat logs:
error: context "kubenet-185135" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-185135" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-185135" does not exist

>>> k8s: coredns logs:
error: context "kubenet-185135" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-185135" does not exist

>>> k8s: api server logs:
error: context "kubenet-185135" does not exist

>>> host: /etc/cni:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> host: ip a s:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> host: ip r s:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> host: iptables-save:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> host: iptables table nat:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-185135" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-185135" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-185135" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> host: kubelet daemon config:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> k8s: kubelet logs:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-185135

>>> host: docker daemon status:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> host: docker daemon config:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> host: docker system info:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> host: cri-docker daemon status:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> host: cri-docker daemon config:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> host: cri-dockerd version:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> host: containerd daemon status:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> host: containerd daemon config:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> host: containerd config dump:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> host: crio daemon status:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> host: crio daemon config:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> host: /etc/crio:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

>>> host: crio config:
* Profile "kubenet-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-185135"

----------------------- debugLogs end: kubenet-185135 [took: 3.569885356s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-185135" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-185135
--- SKIP: TestNetworkPlugins/group/kubenet (3.75s)

TestNetworkPlugins/group/cilium (3.75s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-185135 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-185135

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-185135

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-185135

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-185135

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-185135

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-185135

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-185135

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-185135

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-185135

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-185135

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-185135

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-185135" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-185135" does not exist

>>> k8s: netcat logs:
error: context "cilium-185135" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-185135" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-185135" does not exist

>>> k8s: coredns logs:
error: context "cilium-185135" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-185135" does not exist

>>> k8s: api server logs:
error: context "cilium-185135" does not exist

>>> host: /etc/cni:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

>>> host: ip a s:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

>>> host: ip r s:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

>>> host: iptables-save:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

>>> host: iptables table nat:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-185135

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-185135

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-185135" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-185135" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-185135

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-185135

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-185135" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-185135" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-185135" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-185135" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-185135" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

>>> host: kubelet daemon config:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

>>> k8s: kubelet logs:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-185135

>>> host: docker daemon status:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

>>> host: docker daemon config:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

>>> host: docker system info:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

>>> host: cri-docker daemon status:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

>>> host: cri-docker daemon config:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

>>> host: cri-dockerd version:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

>>> host: containerd daemon status:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

>>> host: containerd daemon config:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

>>> host: containerd config dump:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

>>> host: crio daemon status:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

>>> host: crio daemon config:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

>>> host: /etc/crio:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

>>> host: crio config:
* Profile "cilium-185135" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-185135"

----------------------- debugLogs end: cilium-185135 [took: 3.596896918s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-185135" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-185135
--- SKIP: TestNetworkPlugins/group/cilium (3.75s)

TestStartStop/group/disable-driver-mounts (0.19s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-113031" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-113031
--- SKIP: TestStartStop/group/disable-driver-mounts (0.19s)
