Test Report: Docker_Linux_crio_arm64 20107

8d7d309004e1c5aed2c11e9a2f72e102a81e4e45:2024-12-16:37505

Failed tests (4/330)

|-------|------------------------------------------------|--------------|
| Order | Failed test                                    | Duration (s) |
|-------|------------------------------------------------|--------------|
| 36    | TestAddons/parallel/Ingress                    | 153.32       |
| 38    | TestAddons/parallel/MetricsServer              | 349.46       |
| 99    | TestFunctional/parallel/PersistentVolumeClaim  | 188.78       |
| 135   | TestFunctional/parallel/MountCmd/specific-port | 14.74        |
|-------|------------------------------------------------|--------------|
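
To re-run one of the failing tests against a local cluster, something along these lines should work from a minikube source checkout (a sketch: the -run pattern and timeout come from this report, but the harness flags, including --minikube-start-args, are assumptions and may differ between releases):

go test ./test/integration -run "TestAddons/parallel/Ingress" -timeout 60m \
    -args --minikube-start-args="--driver=docker --container-runtime=crio"
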
TestAddons/parallel/Ingress (153.32s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-467441 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-467441 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-467441 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [92d8966d-073f-4e3f-9871-fed74ae04661] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [92d8966d-073f-4e3f-9871-fed74ae04661] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.004505642s
I1216 11:19:14.682888 1137938 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-467441 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-467441 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m9.79284563s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
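
Note: "ssh: Process exited with status 28" in the stderr above is curl's exit code propagated through minikube ssh, and curl uses 28 for "operation timed out" - the request reached the node, but nothing answered on port 80 within the deadline. A manual triage pass could look like the sketch below (profile and namespace are taken from this run; the deployment name ingress-nginx-controller is the addon's usual default and is an assumption here):

minikube -p addons-467441 ssh -- curl -sv --max-time 10 -H 'Host: nginx.example.com' http://127.0.0.1/
kubectl --context addons-467441 -n ingress-nginx get pods,svc
kubectl --context addons-467441 -n ingress-nginx logs deploy/ingress-nginx-controller --tail=50
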
addons_test.go:286: (dbg) Run:  kubectl --context addons-467441 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-467441 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
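
The two fallback steps above fetch the node IP and resolve the ingress-dns test host directly against it. The same check can be repeated by hand; dig gives a terser answer (192.168.49.2 is the node IP this run reported):

nslookup hello-john.test 192.168.49.2
dig +short hello-john.test @192.168.49.2
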
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-467441
helpers_test.go:235: (dbg) docker inspect addons-467441:

-- stdout --
	[
	    {
	        "Id": "29320d75ba4293640167da1d153c817499290388bf213853a7a9d278067e14b1",
	        "Created": "2024-12-16T11:13:39.779228172Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1139202,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-16T11:13:39.92485375Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:02e8be8b1127faa30f09fff745d2a6d385248178d204468bf667a69a71dbf447",
	        "ResolvConfPath": "/var/lib/docker/containers/29320d75ba4293640167da1d153c817499290388bf213853a7a9d278067e14b1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/29320d75ba4293640167da1d153c817499290388bf213853a7a9d278067e14b1/hostname",
	        "HostsPath": "/var/lib/docker/containers/29320d75ba4293640167da1d153c817499290388bf213853a7a9d278067e14b1/hosts",
	        "LogPath": "/var/lib/docker/containers/29320d75ba4293640167da1d153c817499290388bf213853a7a9d278067e14b1/29320d75ba4293640167da1d153c817499290388bf213853a7a9d278067e14b1-json.log",
	        "Name": "/addons-467441",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-467441:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-467441",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/13448c3639bb4a4fd8f8106417771c4d37fe2a9d6d070db7d5a42613b914bee4-init/diff:/var/lib/docker/overlay2/d13e29c6821a56996707870a44a8892ca6c52b8aaf1d7542bba33ae7dbaaadff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/13448c3639bb4a4fd8f8106417771c4d37fe2a9d6d070db7d5a42613b914bee4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/13448c3639bb4a4fd8f8106417771c4d37fe2a9d6d070db7d5a42613b914bee4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/13448c3639bb4a4fd8f8106417771c4d37fe2a9d6d070db7d5a42613b914bee4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-467441",
	                "Source": "/var/lib/docker/volumes/addons-467441/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-467441",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-467441",
	                "name.minikube.sigs.k8s.io": "addons-467441",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bea63fe2b36c7b0f624b7fa9af015cee1b3760acef7ae0b98c97292912ff22aa",
	            "SandboxKey": "/var/run/docker/netns/bea63fe2b36c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34241"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34242"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34245"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34243"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34244"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-467441": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d6fcce2171d5d2c661d67be0fa4b0eab5ab56b6725de74e30593899084a47d1a",
	                    "EndpointID": "a04e33c4656fe0bd1d5b56a524679b1e33117dcb8806e2d199ed23958ab0a5e4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-467441",
	                        "29320d75ba42"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
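
When only a few fields from the inspect output matter, docker inspect --format keeps the post-mortem shorter; the second template below is the same one the minikube logs further down use to look up the forwarded SSH port:

docker inspect addons-467441 --format '{{.State.Status}} pid={{.State.Pid}}'
docker inspect addons-467441 --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'
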
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-467441 -n addons-467441
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-467441 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-467441 logs -n 25: (1.611485182s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Dec 24 11:13 UTC | 16 Dec 24 11:13 UTC |
	| delete  | -p download-only-333054              | download-only-333054   | jenkins | v1.34.0 | 16 Dec 24 11:13 UTC | 16 Dec 24 11:13 UTC |
	| start   | -o=json --download-only              | download-only-400206   | jenkins | v1.34.0 | 16 Dec 24 11:13 UTC |                     |
	|         | -p download-only-400206              |                        |         |         |                     |                     |
	|         | --force --alsologtostderr            |                        |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2         |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | --all                                | minikube               | jenkins | v1.34.0 | 16 Dec 24 11:13 UTC | 16 Dec 24 11:13 UTC |
	| delete  | -p download-only-400206              | download-only-400206   | jenkins | v1.34.0 | 16 Dec 24 11:13 UTC | 16 Dec 24 11:13 UTC |
	| delete  | -p download-only-333054              | download-only-333054   | jenkins | v1.34.0 | 16 Dec 24 11:13 UTC | 16 Dec 24 11:13 UTC |
	| delete  | -p download-only-400206              | download-only-400206   | jenkins | v1.34.0 | 16 Dec 24 11:13 UTC | 16 Dec 24 11:13 UTC |
	| start   | --download-only -p                   | download-docker-168069 | jenkins | v1.34.0 | 16 Dec 24 11:13 UTC |                     |
	|         | download-docker-168069               |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p download-docker-168069            | download-docker-168069 | jenkins | v1.34.0 | 16 Dec 24 11:13 UTC | 16 Dec 24 11:13 UTC |
	| start   | --download-only -p                   | binary-mirror-469402   | jenkins | v1.34.0 | 16 Dec 24 11:13 UTC |                     |
	|         | binary-mirror-469402                 |                        |         |         |                     |                     |
	|         | --alsologtostderr                    |                        |         |         |                     |                     |
	|         | --binary-mirror                      |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43945               |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-469402              | binary-mirror-469402   | jenkins | v1.34.0 | 16 Dec 24 11:13 UTC | 16 Dec 24 11:13 UTC |
	| addons  | enable dashboard -p                  | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:13 UTC |                     |
	|         | addons-467441                        |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                 | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:13 UTC |                     |
	|         | addons-467441                        |                        |         |         |                     |                     |
	| start   | -p addons-467441 --wait=true         | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:13 UTC | 16 Dec 24 11:17 UTC |
	|         | --memory=4000 --alsologtostderr      |                        |         |         |                     |                     |
	|         | --addons=registry                    |                        |         |         |                     |                     |
	|         | --addons=metrics-server              |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots             |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver         |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                    |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner               |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget            |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin        |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano       |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin       |                        |         |         |                     |                     |
	|         | --driver=docker                      |                        |         |         |                     |                     |
	|         | --container-runtime=crio             |                        |         |         |                     |                     |
	|         | --addons=ingress                     |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                 |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher |                        |         |         |                     |                     |
	| addons  | addons-467441 addons disable         | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:17 UTC | 16 Dec 24 11:17 UTC |
	|         | volcano --alsologtostderr -v=1       |                        |         |         |                     |                     |
	| addons  | addons-467441 addons disable         | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:17 UTC | 16 Dec 24 11:17 UTC |
	|         | gcp-auth --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | enable headlamp                      | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:17 UTC | 16 Dec 24 11:17 UTC |
	|         | -p addons-467441                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-467441 addons disable         | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:17 UTC | 16 Dec 24 11:17 UTC |
	|         | headlamp --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| ip      | addons-467441 ip                     | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:17 UTC | 16 Dec 24 11:17 UTC |
	| addons  | addons-467441 addons disable         | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:17 UTC | 16 Dec 24 11:17 UTC |
	|         | registry --alsologtostderr           |                        |         |         |                     |                     |
	|         | -v=1                                 |                        |         |         |                     |                     |
	| addons  | addons-467441 addons                 | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:18 UTC | 16 Dec 24 11:18 UTC |
	|         | disable volumesnapshots              |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-467441 addons                 | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:18 UTC | 16 Dec 24 11:18 UTC |
	|         | disable csi-hostpath-driver          |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| addons  | addons-467441 addons                 | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:18 UTC | 16 Dec 24 11:19 UTC |
	|         | disable inspektor-gadget             |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1               |                        |         |         |                     |                     |
	| ssh     | addons-467441 ssh curl -s            | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:19 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:          |                        |         |         |                     |                     |
	|         | nginx.example.com'                   |                        |         |         |                     |                     |
	| ip      | addons-467441 ip                     | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:21 UTC | 16 Dec 24 11:21 UTC |
	|---------|--------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 11:13:14
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 11:13:14.155364 1138702 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:13:14.155570 1138702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:13:14.155598 1138702 out.go:358] Setting ErrFile to fd 2...
	I1216 11:13:14.155617 1138702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:13:14.156122 1138702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-1132549/.minikube/bin
	I1216 11:13:14.156636 1138702 out.go:352] Setting JSON to false
	I1216 11:13:14.157594 1138702 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":28540,"bootTime":1734319055,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1216 11:13:14.157695 1138702 start.go:139] virtualization:  
	I1216 11:13:14.161349 1138702 out.go:177] * [addons-467441] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1216 11:13:14.164247 1138702 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 11:13:14.164369 1138702 notify.go:220] Checking for updates...
	I1216 11:13:14.169832 1138702 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 11:13:14.172663 1138702 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20107-1132549/kubeconfig
	I1216 11:13:14.175554 1138702 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-1132549/.minikube
	I1216 11:13:14.178400 1138702 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 11:13:14.181223 1138702 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 11:13:14.184324 1138702 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 11:13:14.210384 1138702 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1216 11:13:14.210510 1138702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 11:13:14.272602 1138702 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-12-16 11:13:14.263710896 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1216 11:13:14.272736 1138702 docker.go:318] overlay module found
	I1216 11:13:14.275999 1138702 out.go:177] * Using the docker driver based on user configuration
	I1216 11:13:14.278827 1138702 start.go:297] selected driver: docker
	I1216 11:13:14.278845 1138702 start.go:901] validating driver "docker" against <nil>
	I1216 11:13:14.278857 1138702 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 11:13:14.279624 1138702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 11:13:14.329342 1138702 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-12-16 11:13:14.320797195 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1216 11:13:14.329565 1138702 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 11:13:14.329789 1138702 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 11:13:14.332735 1138702 out.go:177] * Using Docker driver with root privileges
	I1216 11:13:14.335754 1138702 cni.go:84] Creating CNI manager for ""
	I1216 11:13:14.335827 1138702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 11:13:14.335848 1138702 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 11:13:14.335935 1138702 start.go:340] cluster config:
	{Name:addons-467441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-467441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:13:14.339080 1138702 out.go:177] * Starting "addons-467441" primary control-plane node in "addons-467441" cluster
	I1216 11:13:14.341906 1138702 cache.go:121] Beginning downloading kic base image for docker with crio
	I1216 11:13:14.344871 1138702 out.go:177] * Pulling base image v0.0.45-1733912881-20083 ...
	I1216 11:13:14.347726 1138702 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1216 11:13:14.347811 1138702 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20107-1132549/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4
	I1216 11:13:14.347822 1138702 cache.go:56] Caching tarball of preloaded images
	I1216 11:13:14.347832 1138702 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local docker daemon
	I1216 11:13:14.347905 1138702 preload.go:172] Found /home/jenkins/minikube-integration/20107-1132549/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 11:13:14.347915 1138702 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1216 11:13:14.348279 1138702 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/config.json ...
	I1216 11:13:14.348310 1138702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/config.json: {Name:mk0880d5bf7802bbb02fd0af2735bb69c597982f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:13:14.363961 1138702 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 to local cache
	I1216 11:13:14.364071 1138702 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local cache directory
	I1216 11:13:14.364089 1138702 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local cache directory, skipping pull
	I1216 11:13:14.364094 1138702 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 exists in cache, skipping pull
	I1216 11:13:14.364101 1138702 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 as a tarball
	I1216 11:13:14.364107 1138702 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 from local cache
	I1216 11:13:31.736407 1138702 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 from cached tarball
	I1216 11:13:31.736459 1138702 cache.go:194] Successfully downloaded all kic artifacts
	I1216 11:13:31.736510 1138702 start.go:360] acquireMachinesLock for addons-467441: {Name:mkb047cb330c474c9d07841e4319f52660cec1dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 11:13:31.736662 1138702 start.go:364] duration metric: took 128.546µs to acquireMachinesLock for "addons-467441"
	I1216 11:13:31.736691 1138702 start.go:93] Provisioning new machine with config: &{Name:addons-467441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-467441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 11:13:31.736794 1138702 start.go:125] createHost starting for "" (driver="docker")
	I1216 11:13:31.740297 1138702 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1216 11:13:31.740565 1138702 start.go:159] libmachine.API.Create for "addons-467441" (driver="docker")
	I1216 11:13:31.740602 1138702 client.go:168] LocalClient.Create starting
	I1216 11:13:31.740721 1138702 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20107-1132549/.minikube/certs/ca.pem
	I1216 11:13:32.903436 1138702 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20107-1132549/.minikube/certs/cert.pem
	I1216 11:13:33.944605 1138702 cli_runner.go:164] Run: docker network inspect addons-467441 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 11:13:33.960119 1138702 cli_runner.go:211] docker network inspect addons-467441 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 11:13:33.960206 1138702 network_create.go:284] running [docker network inspect addons-467441] to gather additional debugging logs...
	I1216 11:13:33.960227 1138702 cli_runner.go:164] Run: docker network inspect addons-467441
	W1216 11:13:33.975828 1138702 cli_runner.go:211] docker network inspect addons-467441 returned with exit code 1
	I1216 11:13:33.975865 1138702 network_create.go:287] error running [docker network inspect addons-467441]: docker network inspect addons-467441: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-467441 not found
	I1216 11:13:33.975880 1138702 network_create.go:289] output of [docker network inspect addons-467441]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-467441 not found
	
	** /stderr **
	I1216 11:13:33.975980 1138702 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 11:13:33.992146 1138702 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018b3080}
	I1216 11:13:33.992191 1138702 network_create.go:124] attempt to create docker network addons-467441 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1216 11:13:33.992248 1138702 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-467441 addons-467441
	I1216 11:13:34.067409 1138702 network_create.go:108] docker network addons-467441 192.168.49.0/24 created
	I1216 11:13:34.067444 1138702 kic.go:121] calculated static IP "192.168.49.2" for the "addons-467441" container
	I1216 11:13:34.067536 1138702 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 11:13:34.084637 1138702 cli_runner.go:164] Run: docker volume create addons-467441 --label name.minikube.sigs.k8s.io=addons-467441 --label created_by.minikube.sigs.k8s.io=true
	I1216 11:13:34.102956 1138702 oci.go:103] Successfully created a docker volume addons-467441
	I1216 11:13:34.103082 1138702 cli_runner.go:164] Run: docker run --rm --name addons-467441-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-467441 --entrypoint /usr/bin/test -v addons-467441:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 -d /var/lib
	I1216 11:13:35.650028 1138702 cli_runner.go:217] Completed: docker run --rm --name addons-467441-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-467441 --entrypoint /usr/bin/test -v addons-467441:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 -d /var/lib: (1.546897633s)
	I1216 11:13:35.650057 1138702 oci.go:107] Successfully prepared a docker volume addons-467441
	I1216 11:13:35.650086 1138702 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1216 11:13:35.650106 1138702 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 11:13:35.650175 1138702 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20107-1132549/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-467441:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 11:13:39.713697 1138702 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20107-1132549/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-467441:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 -I lz4 -xf /preloaded.tar -C /extractDir: (4.063483647s)
	I1216 11:13:39.713732 1138702 kic.go:203] duration metric: took 4.06362355s to extract preloaded images to volume ...
	W1216 11:13:39.713879 1138702 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1216 11:13:39.713998 1138702 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 11:13:39.764996 1138702 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-467441 --name addons-467441 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-467441 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-467441 --network addons-467441 --ip 192.168.49.2 --volume addons-467441:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2
	I1216 11:13:40.130783 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Running}}
	I1216 11:13:40.163054 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:13:40.186297 1138702 cli_runner.go:164] Run: docker exec addons-467441 stat /var/lib/dpkg/alternatives/iptables
	I1216 11:13:40.235189 1138702 oci.go:144] the created container "addons-467441" has a running status.
	I1216 11:13:40.235217 1138702 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa...
	I1216 11:13:40.890711 1138702 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 11:13:40.924229 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:13:40.942928 1138702 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 11:13:40.942949 1138702 kic_runner.go:114] Args: [docker exec --privileged addons-467441 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 11:13:41.007854 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:13:41.026726 1138702 machine.go:93] provisionDockerMachine start ...
	I1216 11:13:41.026820 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:13:41.049284 1138702 main.go:141] libmachine: Using SSH client type: native
	I1216 11:13:41.049534 1138702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 34241 <nil> <nil>}
	I1216 11:13:41.049543 1138702 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 11:13:41.188526 1138702 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-467441
	
	I1216 11:13:41.188597 1138702 ubuntu.go:169] provisioning hostname "addons-467441"
	I1216 11:13:41.188693 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:13:41.211264 1138702 main.go:141] libmachine: Using SSH client type: native
	I1216 11:13:41.211525 1138702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 34241 <nil> <nil>}
	I1216 11:13:41.211539 1138702 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-467441 && echo "addons-467441" | sudo tee /etc/hostname
	I1216 11:13:41.356815 1138702 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-467441
	
	I1216 11:13:41.356903 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:13:41.375789 1138702 main.go:141] libmachine: Using SSH client type: native
	I1216 11:13:41.376045 1138702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 34241 <nil> <nil>}
	I1216 11:13:41.376069 1138702 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-467441' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-467441/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-467441' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 11:13:41.512784 1138702 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1216 11:13:41.512818 1138702 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20107-1132549/.minikube CaCertPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20107-1132549/.minikube}
	I1216 11:13:41.512849 1138702 ubuntu.go:177] setting up certificates
	I1216 11:13:41.512858 1138702 provision.go:84] configureAuth start
	I1216 11:13:41.512923 1138702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-467441
	I1216 11:13:41.530459 1138702 provision.go:143] copyHostCerts
	I1216 11:13:41.530545 1138702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-1132549/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20107-1132549/.minikube/ca.pem (1078 bytes)
	I1216 11:13:41.530678 1138702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-1132549/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20107-1132549/.minikube/cert.pem (1123 bytes)
	I1216 11:13:41.530742 1138702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-1132549/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20107-1132549/.minikube/key.pem (1679 bytes)
	I1216 11:13:41.530801 1138702 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20107-1132549/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20107-1132549/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20107-1132549/.minikube/certs/ca-key.pem org=jenkins.addons-467441 san=[127.0.0.1 192.168.49.2 addons-467441 localhost minikube]
	I1216 11:13:41.857987 1138702 provision.go:177] copyRemoteCerts
	I1216 11:13:41.858056 1138702 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 11:13:41.858099 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:13:41.878051 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:13:41.973963 1138702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-1132549/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 11:13:41.998468 1138702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-1132549/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 11:13:42.027072 1138702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-1132549/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 11:13:42.052364 1138702 provision.go:87] duration metric: took 539.491064ms to configureAuth
	I1216 11:13:42.052397 1138702 ubuntu.go:193] setting minikube options for container-runtime
	I1216 11:13:42.052588 1138702 config.go:182] Loaded profile config "addons-467441": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:13:42.052706 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:13:42.071074 1138702 main.go:141] libmachine: Using SSH client type: native
	I1216 11:13:42.071363 1138702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 34241 <nil> <nil>}
	I1216 11:13:42.071390 1138702 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 11:13:42.314277 1138702 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 11:13:42.314300 1138702 machine.go:96] duration metric: took 1.28755524s to provisionDockerMachine
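CRIO_MINIKUBE_OPTIONS lands in /etc/sysconfig/crio.minikube, which the kicbase crio systemd unit presumably sources as an EnvironmentFile before the restart above takes effect; a quick way to confirm both halves on the node (sketch, not part of the original run):

	cat /etc/sysconfig/crio.minikube
	systemctl cat crio | grep -i -e EnvironmentFile -e crio.minikube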
	I1216 11:13:42.314311 1138702 client.go:171] duration metric: took 10.573699241s to LocalClient.Create
	I1216 11:13:42.314325 1138702 start.go:167] duration metric: took 10.573761959s to libmachine.API.Create "addons-467441"
	I1216 11:13:42.314332 1138702 start.go:293] postStartSetup for "addons-467441" (driver="docker")
	I1216 11:13:42.314344 1138702 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 11:13:42.314413 1138702 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 11:13:42.314460 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:13:42.333948 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:13:42.430859 1138702 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 11:13:42.434252 1138702 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 11:13:42.434291 1138702 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1216 11:13:42.434303 1138702 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1216 11:13:42.434311 1138702 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1216 11:13:42.434322 1138702 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-1132549/.minikube/addons for local assets ...
	I1216 11:13:42.434440 1138702 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-1132549/.minikube/files for local assets ...
	I1216 11:13:42.434479 1138702 start.go:296] duration metric: took 120.139349ms for postStartSetup
	I1216 11:13:42.434813 1138702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-467441
	I1216 11:13:42.452587 1138702 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/config.json ...
	I1216 11:13:42.452952 1138702 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 11:13:42.453008 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:13:42.471315 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:13:42.561844 1138702 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 11:13:42.566651 1138702 start.go:128] duration metric: took 10.829838817s to createHost
	I1216 11:13:42.566682 1138702 start.go:83] releasing machines lock for "addons-467441", held for 10.830007945s
	I1216 11:13:42.566789 1138702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-467441
	I1216 11:13:42.583497 1138702 ssh_runner.go:195] Run: cat /version.json
	I1216 11:13:42.583554 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:13:42.583808 1138702 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 11:13:42.583887 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:13:42.601939 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:13:42.608900 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:13:42.696102 1138702 ssh_runner.go:195] Run: systemctl --version
	I1216 11:13:42.827648 1138702 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 11:13:42.968154 1138702 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 11:13:42.972430 1138702 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 11:13:42.993395 1138702 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1216 11:13:42.993483 1138702 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 11:13:43.033143 1138702 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1216 11:13:43.033209 1138702 start.go:495] detecting cgroup driver to use...
	I1216 11:13:43.033256 1138702 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 11:13:43.033341 1138702 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 11:13:43.050220 1138702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 11:13:43.062437 1138702 docker.go:217] disabling cri-docker service (if available) ...
	I1216 11:13:43.062543 1138702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 11:13:43.076205 1138702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 11:13:43.090629 1138702 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 11:13:43.171417 1138702 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 11:13:43.269351 1138702 docker.go:233] disabling docker service ...
	I1216 11:13:43.269420 1138702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 11:13:43.290166 1138702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 11:13:43.302424 1138702 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 11:13:43.393044 1138702 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 11:13:43.486576 1138702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 11:13:43.498264 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 11:13:43.514159 1138702 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 11:13:43.514231 1138702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:13:43.523461 1138702 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 11:13:43.523581 1138702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:13:43.533571 1138702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:13:43.543698 1138702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:13:43.553926 1138702 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 11:13:43.563206 1138702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:13:43.572869 1138702 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:13:43.588024 1138702 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
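The sed chain above rewrites the pause image, cgroup manager, conmon cgroup, and unprivileged-port sysctl in a single drop-in; a spot check of the result, assuming the same path as in the log:

	sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	sudo grep -A 2 default_sysctls /etc/crio/crio.conf.d/02-crio.conf
	# expect "net.ipv4.ip_unprivileged_port_start=0" inside the default_sysctls list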
	I1216 11:13:43.597303 1138702 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 11:13:43.605557 1138702 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 11:13:43.613884 1138702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:13:43.701222 1138702 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 11:13:43.829646 1138702 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 11:13:43.829906 1138702 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 11:13:43.834796 1138702 start.go:563] Will wait 60s for crictl version
	I1216 11:13:43.834909 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:13:43.838557 1138702 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 11:13:43.879735 1138702 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
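crictl picked up the endpoint from the /etc/crictl.yaml written earlier; the same version query also works with the endpoint passed explicitly, which makes a handy smoke test when the YAML is in doubt (sketch, not part of the original run):

	sudo crictl --runtime-endpoint unix:///var/run/crio/crio.sock version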
	I1216 11:13:43.879906 1138702 ssh_runner.go:195] Run: crio --version
	I1216 11:13:43.922760 1138702 ssh_runner.go:195] Run: crio --version
	I1216 11:13:43.965473 1138702 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1216 11:13:43.968338 1138702 cli_runner.go:164] Run: docker network inspect addons-467441 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 11:13:43.984890 1138702 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 11:13:43.988408 1138702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
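The /etc/hosts rewrite above pins the Docker gateway name inside the node; a one-line check (sketch, not part of the original run):

	getent hosts host.minikube.internal   # expect 192.168.49.1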
	I1216 11:13:43.999027 1138702 kubeadm.go:883] updating cluster {Name:addons-467441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-467441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 11:13:43.999147 1138702 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1216 11:13:43.999213 1138702 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 11:13:44.085060 1138702 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 11:13:44.085083 1138702 crio.go:433] Images already preloaded, skipping extraction
	I1216 11:13:44.085138 1138702 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 11:13:44.124719 1138702 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 11:13:44.124744 1138702 cache_images.go:84] Images are preloaded, skipping loading
	I1216 11:13:44.124773 1138702 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1216 11:13:44.124880 1138702 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-467441 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-467441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 11:13:44.124982 1138702 ssh_runner.go:195] Run: crio config
	I1216 11:13:44.199284 1138702 cni.go:84] Creating CNI manager for ""
	I1216 11:13:44.199307 1138702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 11:13:44.199318 1138702 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 11:13:44.199342 1138702 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-467441 NodeName:addons-467441 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 11:13:44.199479 1138702 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-467441"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 11:13:44.199557 1138702 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1216 11:13:44.208470 1138702 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 11:13:44.208548 1138702 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 11:13:44.217262 1138702 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1216 11:13:44.235644 1138702 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 11:13:44.254591 1138702 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
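With the rendered config copied to the node as kubeadm.yaml.new, it can be sanity-checked before init; a sketch assuming kubeadm v1.31's "config validate" subcommand and the binary path found above:

	sudo /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new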
	I1216 11:13:44.272884 1138702 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 11:13:44.276284 1138702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 11:13:44.287079 1138702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:13:44.376317 1138702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 11:13:44.389899 1138702 certs.go:68] Setting up /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441 for IP: 192.168.49.2
	I1216 11:13:44.389931 1138702 certs.go:194] generating shared ca certs ...
	I1216 11:13:44.389965 1138702 certs.go:226] acquiring lock for ca certs: {Name:mk010ea4b11a1a3a57224479eec9717d60444c54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:13:44.390134 1138702 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20107-1132549/.minikube/ca.key
	I1216 11:13:44.825309 1138702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-1132549/.minikube/ca.crt ...
	I1216 11:13:44.825340 1138702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-1132549/.minikube/ca.crt: {Name:mke58c373925d39f5dfe073658cbfc0208df6c99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:13:44.826188 1138702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-1132549/.minikube/ca.key ...
	I1216 11:13:44.826207 1138702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-1132549/.minikube/ca.key: {Name:mka2264f04472ab6f16e0c77f2395ac6c64d531f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:13:44.826895 1138702 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20107-1132549/.minikube/proxy-client-ca.key
	I1216 11:13:45.362285 1138702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-1132549/.minikube/proxy-client-ca.crt ...
	I1216 11:13:45.362394 1138702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-1132549/.minikube/proxy-client-ca.crt: {Name:mk1778f8456039df95618c7d6840b9eb924220c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:13:45.362580 1138702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-1132549/.minikube/proxy-client-ca.key ...
	I1216 11:13:45.362596 1138702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-1132549/.minikube/proxy-client-ca.key: {Name:mkf8cdfd6a5cac6139ec81f20a14ef50e56d1477 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:13:45.363309 1138702 certs.go:256] generating profile certs ...
	I1216 11:13:45.363382 1138702 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.key
	I1216 11:13:45.363408 1138702 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt with IP's: []
	I1216 11:13:46.021702 1138702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt ...
	I1216 11:13:46.021742 1138702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: {Name:mkbe5d1f751761ada51ffd61defa3d5bf59ca7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:13:46.021970 1138702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.key ...
	I1216 11:13:46.021986 1138702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.key: {Name:mkaa5e181cc6518cdfa1b39e3d8ed34b2e04c552 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:13:46.022089 1138702 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/apiserver.key.a9e908fe
	I1216 11:13:46.022112 1138702 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/apiserver.crt.a9e908fe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1216 11:13:47.046993 1138702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/apiserver.crt.a9e908fe ...
	I1216 11:13:47.047026 1138702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/apiserver.crt.a9e908fe: {Name:mkef6cdde359c30fcaa658078332adc7b9c4f793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:13:47.047231 1138702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/apiserver.key.a9e908fe ...
	I1216 11:13:47.047246 1138702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/apiserver.key.a9e908fe: {Name:mk3ef25461d52a94fd2ace0b91b2d5657eeac57f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:13:47.047337 1138702 certs.go:381] copying /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/apiserver.crt.a9e908fe -> /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/apiserver.crt
	I1216 11:13:47.047433 1138702 certs.go:385] copying /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/apiserver.key.a9e908fe -> /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/apiserver.key
	I1216 11:13:47.047493 1138702 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/proxy-client.key
	I1216 11:13:47.047516 1138702 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/proxy-client.crt with IP's: []
	I1216 11:13:47.617258 1138702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/proxy-client.crt ...
	I1216 11:13:47.617291 1138702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/proxy-client.crt: {Name:mk7583908de7b7da789bffe19ba40e7022cc5497 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:13:47.618162 1138702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/proxy-client.key ...
	I1216 11:13:47.618185 1138702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/proxy-client.key: {Name:mk8aef23970592be9a0b81f8db808d51ad4c4c16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:13:47.619007 1138702 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-1132549/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 11:13:47.619055 1138702 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-1132549/.minikube/certs/ca.pem (1078 bytes)
	I1216 11:13:47.619092 1138702 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-1132549/.minikube/certs/cert.pem (1123 bytes)
	I1216 11:13:47.619123 1138702 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-1132549/.minikube/certs/key.pem (1679 bytes)
	I1216 11:13:47.619809 1138702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-1132549/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 11:13:47.644616 1138702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-1132549/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 11:13:47.669646 1138702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-1132549/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 11:13:47.693649 1138702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-1132549/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 11:13:47.718365 1138702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 11:13:47.741899 1138702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 11:13:47.765868 1138702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 11:13:47.790121 1138702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 11:13:47.813935 1138702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-1132549/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 11:13:47.838875 1138702 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 11:13:47.858546 1138702 ssh_runner.go:195] Run: openssl version
	I1216 11:13:47.863964 1138702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 11:13:47.873330 1138702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:13:47.876606 1138702 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 11:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:13:47.876677 1138702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:13:47.884122 1138702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
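The b5213941.0 link name is not arbitrary: OpenSSL resolves CA files by subject hash, which is exactly what the openssl x509 -hash call above computes; reproducing it by hand:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	# prints b5213941, hence the /etc/ssl/certs/b5213941.0 symlink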
	I1216 11:13:47.893782 1138702 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 11:13:47.897187 1138702 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 11:13:47.897239 1138702 kubeadm.go:392] StartCluster: {Name:addons-467441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-467441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:13:47.897322 1138702 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 11:13:47.897398 1138702 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 11:13:47.944721 1138702 cri.go:89] found id: ""
	I1216 11:13:47.944817 1138702 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 11:13:47.953814 1138702 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 11:13:47.962925 1138702 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1216 11:13:47.963036 1138702 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 11:13:47.971792 1138702 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 11:13:47.971814 1138702 kubeadm.go:157] found existing configuration files:
	
	I1216 11:13:47.971890 1138702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 11:13:47.980741 1138702 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 11:13:47.980839 1138702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 11:13:47.989331 1138702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 11:13:47.998398 1138702 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 11:13:47.998517 1138702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 11:13:48.008279 1138702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 11:13:48.018452 1138702 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 11:13:48.018533 1138702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 11:13:48.027827 1138702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 11:13:48.037828 1138702 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 11:13:48.037963 1138702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 11:13:48.047889 1138702 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 11:13:48.090451 1138702 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1216 11:13:48.090513 1138702 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 11:13:48.109689 1138702 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1216 11:13:48.109766 1138702 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1072-aws
	I1216 11:13:48.109807 1138702 kubeadm.go:310] OS: Linux
	I1216 11:13:48.109864 1138702 kubeadm.go:310] CGROUPS_CPU: enabled
	I1216 11:13:48.109917 1138702 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1216 11:13:48.109967 1138702 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1216 11:13:48.110021 1138702 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1216 11:13:48.110073 1138702 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1216 11:13:48.110128 1138702 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1216 11:13:48.110176 1138702 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1216 11:13:48.110229 1138702 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1216 11:13:48.110278 1138702 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1216 11:13:48.169671 1138702 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 11:13:48.169787 1138702 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 11:13:48.169890 1138702 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 11:13:48.176450 1138702 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 11:13:48.183351 1138702 out.go:235]   - Generating certificates and keys ...
	I1216 11:13:48.183491 1138702 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 11:13:48.183569 1138702 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 11:13:48.378862 1138702 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 11:13:49.300047 1138702 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1216 11:13:50.186067 1138702 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1216 11:13:50.780969 1138702 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1216 11:13:51.097828 1138702 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1216 11:13:51.097980 1138702 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-467441 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 11:13:51.339457 1138702 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1216 11:13:51.339955 1138702 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-467441 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 11:13:51.540511 1138702 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 11:13:52.237262 1138702 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 11:13:52.697623 1138702 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1216 11:13:52.697827 1138702 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 11:13:52.895081 1138702 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 11:13:53.855936 1138702 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 11:13:54.042422 1138702 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 11:13:54.344492 1138702 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 11:13:54.838688 1138702 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 11:13:54.839279 1138702 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 11:13:54.842285 1138702 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 11:13:54.845674 1138702 out.go:235]   - Booting up control plane ...
	I1216 11:13:54.845780 1138702 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 11:13:54.845863 1138702 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 11:13:54.845940 1138702 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 11:13:54.855044 1138702 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 11:13:54.861005 1138702 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 11:13:54.861247 1138702 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 11:13:54.961366 1138702 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 11:13:54.961515 1138702 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 11:13:55.963423 1138702 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001912945s
	I1216 11:13:55.963523 1138702 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1216 11:14:02.464634 1138702 kubeadm.go:310] [api-check] The API server is healthy after 6.501456843s
	I1216 11:14:02.489573 1138702 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 11:14:02.505609 1138702 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 11:14:02.531611 1138702 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 11:14:02.531807 1138702 kubeadm.go:310] [mark-control-plane] Marking the node addons-467441 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 11:14:02.544803 1138702 kubeadm.go:310] [bootstrap-token] Using token: m5v603.9e6ugxdm6391fj1l
	I1216 11:14:02.547845 1138702 out.go:235]   - Configuring RBAC rules ...
	I1216 11:14:02.547974 1138702 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 11:14:02.552264 1138702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 11:14:02.560678 1138702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 11:14:02.569321 1138702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 11:14:02.578230 1138702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 11:14:02.584172 1138702 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 11:14:02.873125 1138702 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 11:14:03.317576 1138702 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 11:14:03.871684 1138702 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1216 11:14:03.872904 1138702 kubeadm.go:310] 
	I1216 11:14:03.872981 1138702 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 11:14:03.872996 1138702 kubeadm.go:310] 
	I1216 11:14:03.873074 1138702 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 11:14:03.873083 1138702 kubeadm.go:310] 
	I1216 11:14:03.873110 1138702 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 11:14:03.873172 1138702 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 11:14:03.873227 1138702 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 11:14:03.873235 1138702 kubeadm.go:310] 
	I1216 11:14:03.873290 1138702 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 11:14:03.873298 1138702 kubeadm.go:310] 
	I1216 11:14:03.873346 1138702 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 11:14:03.873354 1138702 kubeadm.go:310] 
	I1216 11:14:03.873407 1138702 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 11:14:03.873486 1138702 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 11:14:03.873565 1138702 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 11:14:03.873574 1138702 kubeadm.go:310] 
	I1216 11:14:03.873659 1138702 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 11:14:03.873738 1138702 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 11:14:03.873744 1138702 kubeadm.go:310] 
	I1216 11:14:03.873829 1138702 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token m5v603.9e6ugxdm6391fj1l \
	I1216 11:14:03.873941 1138702 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:732496d321a5163361c7bb7221bca3ef9277db1e77b552da68d2c35a6f9c3ac6 \
	I1216 11:14:03.873967 1138702 kubeadm.go:310] 	--control-plane 
	I1216 11:14:03.873974 1138702 kubeadm.go:310] 
	I1216 11:14:03.874059 1138702 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 11:14:03.874068 1138702 kubeadm.go:310] 
	I1216 11:14:03.874152 1138702 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token m5v603.9e6ugxdm6391fj1l \
	I1216 11:14:03.874259 1138702 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:732496d321a5163361c7bb7221bca3ef9277db1e77b552da68d2c35a6f9c3ac6 
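The --discovery-token-ca-cert-hash that kubeadm prints is the SHA-256 of the cluster CA's public key; it can be recomputed from the ca.crt copied to /var/lib/minikube/certs earlier, using the standard kubeadm recipe (sketch; assumes an RSA CA key, which is what minikube generates):

	openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# should print 732496d321a5163361c7bb7221bca3ef9277db1e77b552da68d2c35a6f9c3ac6, matching the join command above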
	I1216 11:14:03.877479 1138702 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1072-aws\n", err: exit status 1
	I1216 11:14:03.877599 1138702 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1216 11:14:03.877621 1138702 cni.go:84] Creating CNI manager for ""
	I1216 11:14:03.877629 1138702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 11:14:03.882423 1138702 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1216 11:14:03.885317 1138702 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 11:14:03.888900 1138702 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1216 11:14:03.888967 1138702 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 11:14:03.906618 1138702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1216 11:14:04.195666 1138702 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 11:14:04.195802 1138702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 11:14:04.195899 1138702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-467441 minikube.k8s.io/updated_at=2024_12_16T11_14_04_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22da80be3b90f71512d84256b3df4ef76bd13ff8 minikube.k8s.io/name=addons-467441 minikube.k8s.io/primary=true
	I1216 11:14:04.339273 1138702 ops.go:34] apiserver oom_adj: -16
	I1216 11:14:04.339455 1138702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 11:14:04.840077 1138702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 11:14:05.340192 1138702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 11:14:05.840256 1138702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 11:14:06.339510 1138702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 11:14:06.840205 1138702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 11:14:07.339993 1138702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 11:14:07.839507 1138702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 11:14:07.930343 1138702 kubeadm.go:1113] duration metric: took 3.734585351s to wait for elevateKubeSystemPrivileges
	I1216 11:14:07.930370 1138702 kubeadm.go:394] duration metric: took 20.033135259s to StartCluster
	I1216 11:14:07.930387 1138702 settings.go:142] acquiring lock: {Name:mkb28b824e30aa946b7dc0b254d517c0b70b9782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:14:07.931270 1138702 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20107-1132549/kubeconfig
	I1216 11:14:07.931695 1138702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-1132549/kubeconfig: {Name:mka4860de2b5135bd0f5db65e71bb8db0bcf8bc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:14:07.931891 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 11:14:07.931914 1138702 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 11:14:07.932155 1138702 config.go:182] Loaded profile config "addons-467441": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:14:07.932185 1138702 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1216 11:14:07.932278 1138702 addons.go:69] Setting yakd=true in profile "addons-467441"
	I1216 11:14:07.932292 1138702 addons.go:234] Setting addon yakd=true in "addons-467441"
	I1216 11:14:07.932316 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:07.932903 1138702 addons.go:69] Setting inspektor-gadget=true in profile "addons-467441"
	I1216 11:14:07.932920 1138702 addons.go:234] Setting addon inspektor-gadget=true in "addons-467441"
	I1216 11:14:07.932961 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:07.933054 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:07.933462 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:07.934380 1138702 addons.go:69] Setting metrics-server=true in profile "addons-467441"
	I1216 11:14:07.934410 1138702 addons.go:234] Setting addon metrics-server=true in "addons-467441"
	I1216 11:14:07.934462 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:07.935023 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:07.938942 1138702 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-467441"
	I1216 11:14:07.938980 1138702 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-467441"
	I1216 11:14:07.939013 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:07.939558 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:07.952076 1138702 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-467441"
	I1216 11:14:07.952152 1138702 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-467441"
	I1216 11:14:07.952205 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:07.952916 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:07.957637 1138702 addons.go:69] Setting cloud-spanner=true in profile "addons-467441"
	I1216 11:14:07.957719 1138702 addons.go:234] Setting addon cloud-spanner=true in "addons-467441"
	I1216 11:14:07.957782 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:07.958484 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:07.965451 1138702 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-467441"
	I1216 11:14:07.965581 1138702 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-467441"
	I1216 11:14:07.965640 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:07.966533 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:07.974831 1138702 addons.go:69] Setting registry=true in profile "addons-467441"
	I1216 11:14:07.974877 1138702 addons.go:234] Setting addon registry=true in "addons-467441"
	I1216 11:14:07.974916 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:07.975493 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:07.992730 1138702 addons.go:69] Setting storage-provisioner=true in profile "addons-467441"
	I1216 11:14:07.992789 1138702 addons.go:69] Setting gcp-auth=true in profile "addons-467441"
	I1216 11:14:07.992785 1138702 addons.go:234] Setting addon storage-provisioner=true in "addons-467441"
	I1216 11:14:07.992812 1138702 mustload.go:65] Loading cluster: addons-467441
	I1216 11:14:07.992842 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:07.993043 1138702 config.go:182] Loaded profile config "addons-467441": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:14:07.993338 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:07.993382 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:07.992730 1138702 addons.go:69] Setting default-storageclass=true in profile "addons-467441"
	I1216 11:14:08.012852 1138702 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-467441"
	I1216 11:14:08.013274 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:08.015589 1138702 addons.go:69] Setting ingress=true in profile "addons-467441"
	I1216 11:14:08.015623 1138702 addons.go:234] Setting addon ingress=true in "addons-467441"
	I1216 11:14:08.015681 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:08.016267 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:08.022651 1138702 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-467441"
	I1216 11:14:08.022688 1138702 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-467441"
	I1216 11:14:08.023125 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:08.037400 1138702 addons.go:69] Setting ingress-dns=true in profile "addons-467441"
	I1216 11:14:08.037484 1138702 addons.go:234] Setting addon ingress-dns=true in "addons-467441"
	I1216 11:14:08.037563 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:08.038216 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:08.038469 1138702 addons.go:69] Setting volcano=true in profile "addons-467441"
	I1216 11:14:08.038514 1138702 addons.go:234] Setting addon volcano=true in "addons-467441"
	I1216 11:14:08.038583 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:08.044887 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:08.057675 1138702 out.go:177] * Verifying Kubernetes components...
	I1216 11:14:08.078268 1138702 addons.go:69] Setting volumesnapshots=true in profile "addons-467441"
	I1216 11:14:08.078349 1138702 addons.go:234] Setting addon volumesnapshots=true in "addons-467441"
	I1216 11:14:08.078417 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:08.079059 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
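Each `cli_runner.go:164` line above is one invocation of `docker container inspect` with a Go template that extracts only the container's state; the addon setup path runs it once per addon flag before touching the node. A minimal standalone sketch of that check, not minikube's actual implementation (the profile name is the one from this log):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// containerState returns Docker's state string ("running", "exited", ...)
	// for the named container, mirroring the repeated cli_runner call above.
	func containerState(name string) (string, error) {
		out, err := exec.Command("docker", "container", "inspect",
			name, "--format", "{{.State.Status}}").Output()
		if err != nil {
			return "", fmt.Errorf("inspect %s: %w", name, err)
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		state, err := containerState("addons-467441")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("state:", state)
	}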
	I1216 11:14:08.100212 1138702 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1216 11:14:08.103925 1138702 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1216 11:14:08.104176 1138702 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1216 11:14:08.104327 1138702 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1216 11:14:08.104341 1138702 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1216 11:14:08.104420 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.140346 1138702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:14:08.162833 1138702 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1216 11:14:08.162913 1138702 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1216 11:14:08.163019 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.173638 1138702 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1216 11:14:08.173883 1138702 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1216 11:14:08.175678 1138702 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-467441"
	I1216 11:14:08.175719 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:08.176141 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:08.185166 1138702 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 11:14:08.185196 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1216 11:14:08.185257 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.188652 1138702 addons.go:234] Setting addon default-storageclass=true in "addons-467441"
	I1216 11:14:08.188689 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:08.189196 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:08.200220 1138702 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1216 11:14:08.202864 1138702 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	W1216 11:14:08.203073 1138702 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1216 11:14:08.203234 1138702 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1216 11:14:08.216599 1138702 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 11:14:08.217777 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:08.216845 1138702 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 11:14:08.225313 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1216 11:14:08.225397 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.225784 1138702 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1216 11:14:08.226083 1138702 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 11:14:08.226096 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1216 11:14:08.226145 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.216855 1138702 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1216 11:14:08.237325 1138702 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 11:14:08.237356 1138702 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 11:14:08.237428 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.237869 1138702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1216 11:14:08.238154 1138702 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 11:14:08.238167 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 11:14:08.238216 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.257180 1138702 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1216 11:14:08.257205 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1216 11:14:08.257268 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.266751 1138702 out.go:177]   - Using image docker.io/registry:2.8.3
	I1216 11:14:08.270797 1138702 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1216 11:14:08.270825 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1216 11:14:08.270895 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.300098 1138702 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1216 11:14:08.301032 1138702 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1216 11:14:08.301543 1138702 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1216 11:14:08.301622 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.301279 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:08.319412 1138702 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1216 11:14:08.321322 1138702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1216 11:14:08.323869 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:08.325468 1138702 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1216 11:14:08.331750 1138702 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1216 11:14:08.334667 1138702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1216 11:14:08.335033 1138702 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 11:14:08.335050 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1216 11:14:08.335120 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.335268 1138702 out.go:177]   - Using image docker.io/busybox:stable
	I1216 11:14:08.357358 1138702 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 11:14:08.357430 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1216 11:14:08.357518 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.379856 1138702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1216 11:14:08.388816 1138702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1216 11:14:08.395531 1138702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1216 11:14:08.400574 1138702 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1216 11:14:08.402758 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:08.407112 1138702 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1216 11:14:08.407136 1138702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1216 11:14:08.407276 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.408507 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:08.456879 1138702 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 11:14:08.456910 1138702 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 11:14:08.456980 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.465230 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:08.504865 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:08.512026 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:08.513662 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:08.521397 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:08.538441 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:08.542778 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:08.554380 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:08.564863 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:08.583078 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
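The burst of `sshutil.go:53` lines above all dial the same endpoint, 127.0.0.1:34241, which is the host port Docker mapped to the node container's 22/tcp; the preceding `cli_runner` calls recover that port with the nested-index template shown in the log. A small sketch of the same lookup, again via os/exec rather than minikube's internals:

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// sshHostPort resolves the host port mapped to the container's 22/tcp,
	// using the same Go template as the cli_runner lines in this log.
	func sshHostPort(container string) (string, error) {
		tmpl := `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`
		out, err := exec.Command("docker", "container", "inspect", "-f", tmpl, container).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		port, err := sshHostPort("addons-467441")
		if err != nil {
			fmt.Println(err)
			return
		}
		fmt.Println("ssh endpoint: 127.0.0.1:" + port)
	}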
	I1216 11:14:08.686112 1138702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 11:14:08.686251 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 11:14:08.744840 1138702 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1216 11:14:08.744862 1138702 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1216 11:14:08.749105 1138702 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1216 11:14:08.749129 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1216 11:14:08.850788 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 11:14:08.876207 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 11:14:08.885603 1138702 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 11:14:08.885628 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1216 11:14:08.903004 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 11:14:08.935617 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1216 11:14:08.938881 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 11:14:08.959698 1138702 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1216 11:14:08.959723 1138702 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1216 11:14:08.963296 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 11:14:08.971173 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 11:14:08.973111 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1216 11:14:09.007253 1138702 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1216 11:14:09.007281 1138702 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1216 11:14:09.012005 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 11:14:09.012966 1138702 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1216 11:14:09.012991 1138702 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1216 11:14:09.028941 1138702 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 11:14:09.028969 1138702 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 11:14:09.088256 1138702 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1216 11:14:09.088288 1138702 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1216 11:14:09.123855 1138702 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1216 11:14:09.123881 1138702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1216 11:14:09.127798 1138702 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1216 11:14:09.127823 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1216 11:14:09.152282 1138702 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 11:14:09.152306 1138702 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 11:14:09.242685 1138702 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1216 11:14:09.242710 1138702 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1216 11:14:09.283462 1138702 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1216 11:14:09.283488 1138702 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1216 11:14:09.335015 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1216 11:14:09.338289 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 11:14:09.378205 1138702 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1216 11:14:09.378234 1138702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1216 11:14:09.444630 1138702 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1216 11:14:09.444653 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1216 11:14:09.451120 1138702 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1216 11:14:09.451145 1138702 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1216 11:14:09.594916 1138702 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1216 11:14:09.594941 1138702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1216 11:14:09.678034 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1216 11:14:09.683862 1138702 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 11:14:09.683891 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1216 11:14:09.778471 1138702 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1216 11:14:09.778499 1138702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1216 11:14:09.803992 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 11:14:09.880767 1138702 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1216 11:14:09.880792 1138702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1216 11:14:09.956537 1138702 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1216 11:14:09.956562 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1216 11:14:10.018376 1138702 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1216 11:14:10.018403 1138702 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1216 11:14:10.062590 1138702 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1216 11:14:10.062619 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1216 11:14:10.085609 1138702 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1216 11:14:10.085635 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1216 11:14:10.110565 1138702 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 11:14:10.110592 1138702 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1216 11:14:10.171033 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 11:14:11.484788 1138702 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.798492057s)
	I1216 11:14:11.484820 1138702 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
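The bash pipeline that just completed (2.8s) rewrites the coredns ConfigMap in place: it inserts a `hosts` stanza ahead of the `forward . /etc/resolv.conf` plugin so that host.minikube.internal resolves to the gateway address 192.168.49.1 inside the cluster, and it also enables query logging by inserting `log` above `errors`. The hosts-block surgery it performs with sed looks roughly like this in Go (a sketch of the transformation only, not of the kubectl get/replace round-trip):

	package main

	import (
		"fmt"
		"strings"
	)

	// injectHostRecord inserts a hosts{} block directly above the forward
	// plugin, matching the indentation of the line it precedes.
	func injectHostRecord(corefile, gatewayIP string) string {
		var b strings.Builder
		for _, line := range strings.SplitAfter(corefile, "\n") {
			trimmed := strings.TrimLeft(line, " \t")
			if strings.HasPrefix(trimmed, "forward . /etc/resolv.conf") {
				indent := line[:len(line)-len(trimmed)]
				b.WriteString(indent + "hosts {\n")
				b.WriteString(indent + "   " + gatewayIP + " host.minikube.internal\n")
				b.WriteString(indent + "   fallthrough\n")
				b.WriteString(indent + "}\n")
			}
			b.WriteString(line)
		}
		return b.String()
	}

	func main() {
		corefile := ".:53 {\n    errors\n    forward . /etc/resolv.conf\n}\n"
		fmt.Print(injectHostRecord(corefile, "192.168.49.1"))
	}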
	I1216 11:14:11.484877 1138702 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.798694202s)
	I1216 11:14:11.485748 1138702 node_ready.go:35] waiting up to 6m0s for node "addons-467441" to be "Ready" ...
	I1216 11:14:11.486783 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.635968876s)
	I1216 11:14:12.285434 1138702 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-467441" context rescaled to 1 replicas
	I1216 11:14:13.567537 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
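The node_ready.go lines poll the node object's NodeReady condition, which stays "False" until the kubelet reports the node healthy (here it flips once the CNI comes up); the loop above allows up to 6m0s. What one poll amounts to in client-go terms, assuming an already-configured clientset:

	package kapi

	import (
		"context"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// nodeReady reports whether the node's NodeReady condition is True; the
	// node_ready.go:53 lines in this log print this condition's status.
	func nodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}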
	I1216 11:14:13.914143 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.037894583s)
	I1216 11:14:13.914256 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.011226541s)
	I1216 11:14:13.914324 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.978684161s)
	I1216 11:14:13.914384 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.975482297s)
	I1216 11:14:14.240274 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.276941975s)
	I1216 11:14:14.240534 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.269336931s)
	W1216 11:14:14.383405 1138702 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
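The default-storageclass failure above is a plain optimistic-concurrency conflict: between reading the `local-path` StorageClass and writing it back, something else updated the object, so the stale resourceVersion was rejected. The standard remedy is to re-read and retry the update, e.g. with client-go's retry helper. A sketch assuming a configured clientset; the annotation key is the standard default-class marker:

	package addons

	import (
		"context"

		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/util/retry"
	)

	// markNonDefault clears the default-class annotation on a StorageClass,
	// re-reading and retrying when the write hits a resourceVersion conflict
	// like the one logged above.
	func markNonDefault(ctx context.Context, cs kubernetes.Interface, name string) error {
		return retry.RetryOnConflict(retry.DefaultRetry, func() error {
			sc, err := cs.StorageV1().StorageClasses().Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return err
			}
			if sc.Annotations == nil {
				sc.Annotations = map[string]string{}
			}
			sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "false"
			_, err = cs.StorageV1().StorageClasses().Update(ctx, sc, metav1.UpdateOptions{})
			return err // a Conflict error triggers another Get+Update round
		})
	}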
	I1216 11:14:14.510566 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.537418295s)
	I1216 11:14:15.282762 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.947705237s)
	I1216 11:14:15.282836 1138702 addons.go:475] Verifying addon registry=true in "addons-467441"
	I1216 11:14:15.282971 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.270939336s)
	I1216 11:14:15.283015 1138702 addons.go:475] Verifying addon ingress=true in "addons-467441"
	I1216 11:14:15.283396 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.945076256s)
	I1216 11:14:15.283415 1138702 addons.go:475] Verifying addon metrics-server=true in "addons-467441"
	I1216 11:14:15.283465 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.605405433s)
	I1216 11:14:15.283826 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.47979848s)
	W1216 11:14:15.283992 1138702 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1216 11:14:15.284017 1138702 retry.go:31] will retry after 186.02248ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
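The failure being retried here is an ordering problem, not a broken manifest: the VolumeSnapshotClass object is submitted in the same `kubectl apply` that creates its CRD, and the CRD is not yet Established when the CR is validated, hence "no matches for kind". Retrying after a short backoff (186ms above; the retry with `--force` further down succeeds) works; an alternative sketch that removes the race by gating on the Established condition, assuming kubectl on PATH and a reachable cluster:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	// run shells out to kubectl, inheriting this process's stdio.
	func run(args ...string) error {
		cmd := exec.Command("kubectl", args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		return cmd.Run()
	}

	func main() {
		// Create the CRD first, wait until the API server reports it
		// Established, and only then submit custom resources that use it.
		steps := [][]string{
			{"apply", "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml"},
			{"wait", "--for=condition=Established", "--timeout=60s",
				"crd/volumesnapshotclasses.snapshot.storage.k8s.io"},
			{"apply", "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml"},
		}
		for _, s := range steps {
			if err := run(s...); err != nil {
				fmt.Fprintln(os.Stderr, "step failed:", err)
				os.Exit(1)
			}
		}
	}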
	I1216 11:14:15.285935 1138702 out.go:177] * Verifying registry addon...
	I1216 11:14:15.285940 1138702 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-467441 service yakd-dashboard -n yakd-dashboard
	
	I1216 11:14:15.285959 1138702 out.go:177] * Verifying ingress addon...
	I1216 11:14:15.289916 1138702 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1216 11:14:15.290865 1138702 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1216 11:14:15.297822 1138702 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 11:14:15.297862 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:15.299007 1138702 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1216 11:14:15.299030 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:15.470435 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 11:14:15.524625 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.35353729s)
	I1216 11:14:15.524660 1138702 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-467441"
	I1216 11:14:15.527837 1138702 out.go:177] * Verifying csi-hostpath-driver addon...
	I1216 11:14:15.531575 1138702 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1216 11:14:15.544943 1138702 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 11:14:15.544968 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
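Every kapi.go:96 line from here on is one iteration of the same poll: list the pods matching a label selector and re-check until all of them report Running. In client-go terms the loop is roughly the following, assuming a configured clientset (the interval and timeout are illustrative, not minikube's exact values):

	package kapi

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPods polls pods matching selector in ns until all are Running.
	func waitForPods(ctx context.Context, cs kubernetes.Interface, ns, selector string) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
			func(ctx context.Context) (bool, error) {
				pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
				if err != nil || len(pods.Items) == 0 {
					return false, nil // transient errors and empty lists are polled again
				}
				for _, p := range pods.Items {
					if p.Status.Phase != corev1.PodRunning {
						return false, nil
					}
				}
				return true, nil
			})
	}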
	I1216 11:14:15.794949 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:15.796835 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:15.989506 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:16.036296 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:16.293827 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:16.298528 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:16.535741 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:16.794219 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:16.795700 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:17.037856 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:17.294981 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:17.295796 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:17.535576 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:17.794463 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:17.795638 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:17.989594 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:18.036334 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:18.218078 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.747595273s)
	I1216 11:14:18.294735 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:18.295902 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:18.456908 1138702 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1216 11:14:18.457012 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:18.473998 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:18.537544 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:18.582238 1138702 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
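The "scp memory --> ..." lines are not file-to-file copies: the runner streams a byte slice held in memory straight to a path inside the node over the SSH session, which is why each line reports a byte count instead of a source path. The same effect with nothing but the stdlib and the ssh binary, using the endpoint and key printed by sshutil above (a sketch only; minikube uses its own SSH runner rather than shelling out, and the payload below is an illustrative stand-in):

	package main

	import (
		"bytes"
		"fmt"
		"os/exec"
	)

	// pushBytes streams data to a remote path over ssh, the moral equivalent
	// of the "scp memory --> <path>" lines in this log.
	func pushBytes(data []byte, remotePath string) error {
		cmd := exec.Command("ssh",
			"-i", "/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa",
			"-p", "34241", "docker@127.0.0.1",
			fmt.Sprintf("sudo tee %s > /dev/null", remotePath))
		cmd.Stdin = bytes.NewReader(data)
		return cmd.Run()
	}

	func main() {
		payload := []byte(`{"type": "service_account"}`) // illustrative stand-in
		if err := pushBytes(payload, "/var/lib/minikube/google_application_credentials.json"); err != nil {
			fmt.Println(err)
		}
	}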
	I1216 11:14:18.601280 1138702 addons.go:234] Setting addon gcp-auth=true in "addons-467441"
	I1216 11:14:18.601381 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:18.601887 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:18.620078 1138702 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1216 11:14:18.620137 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:18.637921 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:18.751762 1138702 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1216 11:14:18.754744 1138702 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1216 11:14:18.757511 1138702 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1216 11:14:18.757539 1138702 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1216 11:14:18.776541 1138702 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1216 11:14:18.776565 1138702 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1216 11:14:18.794290 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:18.795692 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:18.797811 1138702 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 11:14:18.797833 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1216 11:14:18.816248 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 11:14:19.035615 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:19.310075 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:19.311054 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:19.370119 1138702 addons.go:475] Verifying addon gcp-auth=true in "addons-467441"
	I1216 11:14:19.373404 1138702 out.go:177] * Verifying gcp-auth addon...
	I1216 11:14:19.377256 1138702 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1216 11:14:19.408651 1138702 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1216 11:14:19.408676 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:19.536190 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:19.794385 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:19.794888 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:19.880997 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:19.990098 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:20.035742 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:20.293122 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:20.295275 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:20.380577 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:20.535549 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:20.793310 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:20.795442 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:20.880779 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:21.035722 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:21.293868 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:21.295689 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:21.381073 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:21.535172 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:21.793702 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:21.794391 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:21.881359 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:22.035502 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:22.295239 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:22.296274 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:22.395316 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:22.489159 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:22.535906 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:22.794639 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:22.794840 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:22.880804 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:23.035047 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:23.294478 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:23.295294 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:23.380576 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:23.535745 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:23.798956 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:23.799446 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:23.881205 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:24.035898 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:24.294139 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:24.295074 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:24.380932 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:24.489638 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:24.535566 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:24.793420 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:24.795512 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:24.880891 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:25.035562 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:25.294712 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:25.295470 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:25.381080 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:25.536051 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:25.794486 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:25.795408 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:25.881019 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:26.035571 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:26.293744 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:26.295029 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:26.381003 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:26.535379 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:26.793291 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:26.795588 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:26.881158 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:26.989524 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:27.035329 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:27.294569 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:27.295114 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:27.381547 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:27.535702 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:27.793317 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:27.795325 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:27.891640 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:28.035637 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:28.293796 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:28.294865 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:28.381229 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:28.535506 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:28.794563 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:28.795597 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:28.881109 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:28.989920 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:29.035964 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:29.294498 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:29.295652 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:29.380981 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:29.535402 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:29.793987 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:29.795358 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:29.880554 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:30.037199 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:30.294662 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:30.295610 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:30.381042 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:30.536332 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:30.794138 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:30.795097 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:30.880207 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:31.035991 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:31.294055 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:31.295197 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:31.380834 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:31.489210 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:31.535645 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:31.792929 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:31.794668 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:31.881238 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:32.036012 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:32.294913 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:32.295799 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:32.395671 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:32.536292 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:32.794974 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:32.795182 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:32.881175 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:33.035549 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:33.294979 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:33.295569 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:33.381242 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:33.489586 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:33.535671 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:33.793560 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:33.794893 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:33.881064 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:34.036489 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:34.294625 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:34.295526 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:34.381093 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:34.534936 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:34.795852 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:34.796056 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:34.881073 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:35.035676 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:35.294697 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:35.296112 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:35.380477 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:35.535097 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:35.794050 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:35.794620 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:35.881356 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:35.989499 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:36.034970 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:36.294285 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:36.295526 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:36.381334 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:36.535407 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:36.795018 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:36.795249 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:36.880449 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:37.035221 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:37.293895 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:37.294849 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:37.380901 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:37.534998 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:37.793286 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:37.794974 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:37.881110 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:37.991490 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:38.035128 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:38.293624 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:38.294809 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:38.381355 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:38.535722 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:38.792783 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:38.794569 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:38.880605 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:39.035180 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:39.293986 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:39.294442 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:39.380909 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:39.535108 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:39.794331 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:39.795054 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:39.881050 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:40.036169 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:40.294483 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:40.295218 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:40.380498 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:40.489301 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:40.535664 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:40.793458 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:40.795165 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:40.881187 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:41.035625 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:41.292951 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:41.294218 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:41.380098 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:41.535041 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:41.794132 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:41.794988 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:41.880324 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:42.035861 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:42.293273 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:42.294386 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:42.381844 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:42.535945 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:42.794968 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:42.795150 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:42.881058 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:42.989103 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:43.035309 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:43.293934 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:43.295712 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:43.380994 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:43.534897 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:43.793218 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:43.794546 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:43.880740 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:44.035870 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:44.294145 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:44.295120 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:44.380612 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:44.535498 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:44.793501 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:44.795240 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:44.880341 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:44.989906 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:45.038676 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:45.293501 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:45.295624 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:45.381084 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:45.535038 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:45.793611 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:45.795196 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:45.883445 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:46.035179 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:46.294480 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:46.294740 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:46.380863 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:46.535033 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:46.793541 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:46.796252 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:46.880466 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:47.035019 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:47.294172 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:47.295203 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:47.380622 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:47.488823 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:47.535917 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:47.792839 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:47.794582 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:47.880791 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:48.035593 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:48.293652 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:48.295457 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:48.380634 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:48.534797 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:48.792824 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:48.794537 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:48.880689 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:49.035679 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:49.292998 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:49.295152 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:49.380433 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:49.489580 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:49.535043 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:49.794767 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:49.795558 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:49.880647 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:50.035422 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:50.294416 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:50.295001 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:50.380437 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:50.535771 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:50.793004 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:50.794062 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:50.880924 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:51.034973 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:51.293293 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:51.295250 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:51.380529 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:51.535004 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:51.793910 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:51.794966 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:51.881121 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:51.989330 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:52.035670 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:52.292865 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:52.294985 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:52.381442 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:52.535429 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:52.794297 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:52.795169 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:52.880113 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:53.035297 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:53.294488 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:53.294919 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:53.380924 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:53.535413 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:53.794655 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:53.795713 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:53.880829 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:54.035516 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:54.294075 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:54.295060 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:54.381317 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:54.489575 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:54.535133 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:54.793573 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:54.795801 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:54.880882 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:54.998049 1138702 node_ready.go:49] node "addons-467441" has status "Ready":"True"
	I1216 11:14:54.998086 1138702 node_ready.go:38] duration metric: took 43.512303411s for node "addons-467441" to be "Ready" ...
	I1216 11:14:54.998098 1138702 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 11:14:55.027451 1138702 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-q957p" in "kube-system" namespace to be "Ready" ...
	I1216 11:14:55.054813 1138702 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 11:14:55.054840 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:55.331317 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:55.334471 1138702 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 11:14:55.334497 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:55.382339 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:55.539207 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:55.803261 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:55.804552 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:55.899919 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:56.037804 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:56.296668 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:56.298057 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:56.395666 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:56.537030 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:56.825095 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:56.835307 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:56.881722 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:57.034782 1138702 pod_ready.go:103] pod "coredns-7c65d6cfc9-q957p" in "kube-system" namespace has status "Ready":"False"
	I1216 11:14:57.046598 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:57.295426 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:57.296501 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:57.381689 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:57.574251 1138702 pod_ready.go:93] pod "coredns-7c65d6cfc9-q957p" in "kube-system" namespace has status "Ready":"True"
	I1216 11:14:57.574321 1138702 pod_ready.go:82] duration metric: took 2.546830061s for pod "coredns-7c65d6cfc9-q957p" in "kube-system" namespace to be "Ready" ...
	I1216 11:14:57.574356 1138702 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-467441" in "kube-system" namespace to be "Ready" ...
	I1216 11:14:57.578878 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:57.595107 1138702 pod_ready.go:93] pod "etcd-addons-467441" in "kube-system" namespace has status "Ready":"True"
	I1216 11:14:57.595181 1138702 pod_ready.go:82] duration metric: took 20.803647ms for pod "etcd-addons-467441" in "kube-system" namespace to be "Ready" ...
	I1216 11:14:57.595214 1138702 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-467441" in "kube-system" namespace to be "Ready" ...
	I1216 11:14:57.602065 1138702 pod_ready.go:93] pod "kube-apiserver-addons-467441" in "kube-system" namespace has status "Ready":"True"
	I1216 11:14:57.602142 1138702 pod_ready.go:82] duration metric: took 6.889222ms for pod "kube-apiserver-addons-467441" in "kube-system" namespace to be "Ready" ...
	I1216 11:14:57.602169 1138702 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-467441" in "kube-system" namespace to be "Ready" ...
	I1216 11:14:57.608217 1138702 pod_ready.go:93] pod "kube-controller-manager-addons-467441" in "kube-system" namespace has status "Ready":"True"
	I1216 11:14:57.608293 1138702 pod_ready.go:82] duration metric: took 6.10164ms for pod "kube-controller-manager-addons-467441" in "kube-system" namespace to be "Ready" ...
	I1216 11:14:57.608324 1138702 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pss99" in "kube-system" namespace to be "Ready" ...
	I1216 11:14:57.614680 1138702 pod_ready.go:93] pod "kube-proxy-pss99" in "kube-system" namespace has status "Ready":"True"
	I1216 11:14:57.614758 1138702 pod_ready.go:82] duration metric: took 6.413132ms for pod "kube-proxy-pss99" in "kube-system" namespace to be "Ready" ...
	I1216 11:14:57.614786 1138702 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-467441" in "kube-system" namespace to be "Ready" ...
	I1216 11:14:57.797116 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:57.799256 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:57.881646 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:57.933329 1138702 pod_ready.go:93] pod "kube-scheduler-addons-467441" in "kube-system" namespace has status "Ready":"True"
	I1216 11:14:57.933450 1138702 pod_ready.go:82] duration metric: took 318.643109ms for pod "kube-scheduler-addons-467441" in "kube-system" namespace to be "Ready" ...
	I1216 11:14:57.933498 1138702 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace to be "Ready" ...
	I1216 11:14:58.039890 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:58.300828 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:58.302807 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:58.382630 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:58.539956 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:58.799094 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:58.800099 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:58.883296 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:59.037746 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:59.314933 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:59.315512 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:59.381597 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:59.538657 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:59.794074 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:59.799435 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:59.880889 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:59.941116 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:00.038910 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:00.310075 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:00.311192 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:00.382171 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:00.536255 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:00.796247 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:00.796912 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:00.883439 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:01.035921 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:01.294863 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:01.297997 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:01.381750 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:01.538002 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:01.796568 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:01.797950 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:01.882260 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:01.944474 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:02.041425 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:02.293900 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:02.298875 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:02.386246 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:02.536475 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:02.797452 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:02.797831 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:02.882881 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:03.038405 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:03.295423 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:03.309405 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:03.381909 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:03.537428 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:03.793713 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:03.797385 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:03.880929 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:03.951510 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:04.042925 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:04.296038 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:04.301152 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:04.382044 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:04.537630 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:04.796970 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:04.800405 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:04.889437 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:05.036392 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:05.296972 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:05.298565 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:05.380980 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:05.538881 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:05.796187 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:05.796487 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:05.880524 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:06.036629 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:06.297309 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:06.298378 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:06.397394 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:06.440330 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:06.538520 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:06.796783 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:06.798272 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:06.880452 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:07.036689 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:07.294718 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:07.297253 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:07.385910 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:07.537417 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:07.794254 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:07.796945 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:07.881170 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:08.036892 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:08.293935 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:08.296386 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:08.381277 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:08.440684 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:08.536921 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:08.795833 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:08.796434 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:08.881151 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:09.036312 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:09.299063 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:09.301629 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:09.381045 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:09.537566 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:09.795200 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:09.797904 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:09.881188 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:10.038640 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:10.304665 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:10.305125 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:10.381907 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:10.536285 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:10.797196 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:10.798912 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:10.883387 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:10.948823 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:11.037849 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:11.298301 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:11.299797 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:11.381024 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:11.539937 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:11.797315 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:11.797940 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:11.880535 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:12.036972 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:12.295913 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:12.296967 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:12.396235 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:12.536818 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:12.795496 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:12.796664 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:12.881982 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:13.038513 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:13.298608 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:13.307439 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:13.387300 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:13.442943 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:13.538421 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:13.796007 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:13.797402 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:13.880901 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:14.040489 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:14.294073 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:14.296744 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:14.381604 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:14.536580 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:14.795409 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:14.796373 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:14.880647 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:15.037371 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:15.294093 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:15.296224 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:15.380739 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:15.447680 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:15.536586 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:15.796905 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:15.798355 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:15.881200 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:16.036889 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:16.294964 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:16.297056 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:16.395838 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:16.537198 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:16.795481 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:16.796463 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:16.880955 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:17.036254 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:17.294180 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:17.297726 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:17.381609 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:17.537437 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:17.804581 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:17.805813 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:17.881308 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:17.943557 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:18.037911 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:18.295370 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:18.296070 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:18.380745 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:18.537126 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:18.797201 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:18.797885 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:18.880463 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:19.036838 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:19.295960 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:19.296808 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:19.381987 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:19.538974 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:19.795392 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:19.797231 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:19.881950 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:20.038770 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:20.305409 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:20.306690 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:20.404355 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:20.440658 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:20.536936 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:20.795867 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:20.796475 1138702 kapi.go:107] duration metric: took 1m5.506563678s to wait for kubernetes.io/minikube-addons=registry ...
	I1216 11:15:20.881316 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:21.037341 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:21.295356 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:21.380626 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:21.537240 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:21.795539 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:21.881166 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:22.037784 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:22.296637 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:22.397577 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:22.441473 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:22.537018 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:22.798272 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:22.887057 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:23.036551 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:23.303882 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:23.382374 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:23.537681 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:23.798330 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:23.881199 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:24.038454 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:24.297739 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:24.382657 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:24.539902 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:24.796464 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:24.882348 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:24.943923 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:25.038363 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:25.295488 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:25.381177 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:25.536658 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:25.795497 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:25.880809 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:26.038107 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:26.295980 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:26.381314 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:26.537216 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:26.796176 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:26.881680 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:27.039085 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:27.298101 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:27.389214 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:27.440482 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:27.537818 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:27.797657 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:27.882323 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:28.037208 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:28.297212 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:28.381559 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:28.539771 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:28.797628 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:28.881532 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:29.037383 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:29.296727 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:29.396934 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:29.537289 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:29.796179 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:29.881354 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:29.939953 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:30.037726 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:30.296277 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:30.380354 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:30.537797 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:30.796093 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:30.881340 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:31.037009 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:31.296087 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:31.381261 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:31.539140 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:31.796298 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:31.880918 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:31.940776 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:32.037341 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:32.296249 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:32.396091 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:32.537166 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:32.795935 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:32.880993 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:33.037356 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:33.295590 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:33.383268 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:33.537666 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:33.796370 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:33.881696 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:34.037760 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:34.301645 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:34.382899 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:34.442676 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:34.536880 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:34.795920 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:34.881737 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:35.040083 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:35.296668 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:35.382143 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:35.537488 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:35.796514 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:35.881777 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:36.038231 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:36.298485 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:36.380998 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:36.538956 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:36.818546 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:36.881326 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:36.940527 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:37.039621 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:37.296260 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:37.382601 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:37.537550 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:37.799509 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:37.880826 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:38.036605 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:38.294651 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:38.381165 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:38.538476 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:38.796949 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:38.881812 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:38.943956 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:39.036663 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:39.295566 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:39.381377 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:39.540728 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:39.796177 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:39.899467 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:40.045219 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:40.304233 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:40.420699 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:40.541400 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:40.800182 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:40.883853 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:40.946615 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:41.037064 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:41.296509 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:41.382377 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:41.538489 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:41.796948 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:41.896520 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:42.039921 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:42.315721 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:42.389401 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:42.545032 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:42.797238 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:42.881973 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:43.039477 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:43.302110 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:43.403197 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:43.441852 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:43.536891 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:43.795382 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:43.881254 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:44.038323 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:44.295397 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:44.381226 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:44.536995 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:44.795394 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:44.881551 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:45.038206 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:45.298999 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:45.382929 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:45.538103 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:45.796096 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:45.882138 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:45.943877 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:46.038115 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:46.295573 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:46.386901 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:46.538304 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:46.796136 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:46.881584 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:47.039273 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:47.295650 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:47.381255 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:47.536498 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:47.795760 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:47.886988 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:48.037771 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:48.296701 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:48.396453 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:48.440408 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:48.536523 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:48.796032 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:48.890028 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:49.039589 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:49.295604 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:49.381123 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:49.536002 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:49.795163 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:49.884910 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:50.037679 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:50.296267 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:50.383535 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:50.539994 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:50.796127 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:50.883077 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:50.958697 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:51.042557 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:51.299537 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:51.380847 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:51.538097 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:51.796179 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:51.880507 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:52.073481 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:52.296722 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:52.383232 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:52.537566 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:52.797809 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:52.884834 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:53.038155 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:53.298163 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:53.381850 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:53.449419 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:53.537944 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:53.805530 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:53.880944 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:54.037795 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:54.296293 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:54.382362 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:54.541690 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:54.798720 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:54.881115 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:55.037146 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:55.295849 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:55.380984 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:55.536320 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:55.794966 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:55.881707 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:55.946868 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:56.037795 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:56.296054 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:56.381134 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:56.545292 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:56.795504 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:56.882084 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:57.037156 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:57.295110 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:57.384092 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:57.537386 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:57.795121 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:57.881311 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:58.036681 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:58.295691 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:58.382019 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:58.442145 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:58.537476 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:58.796804 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:58.895957 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:59.036195 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:59.295700 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:59.381179 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:59.536994 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:59.796478 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:59.881730 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:16:00.054607 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:00.303526 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:00.387806 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:16:00.449927 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:00.537429 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:00.796367 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:00.880558 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:16:01.037768 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:01.296220 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:01.383390 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:16:01.547166 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:01.796153 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:01.886236 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:16:02.037623 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:02.295740 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:02.381074 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:16:02.536250 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:02.795404 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:02.880524 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:16:02.940258 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:03.037092 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:03.296633 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:03.381583 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:16:03.537464 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:03.796971 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:03.882659 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:16:04.037535 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:04.296728 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:04.381666 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:16:04.543161 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:04.795831 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:04.880877 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:16:04.941799 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:05.044802 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:05.296456 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:05.395456 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:16:05.537058 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:05.795554 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:05.881194 1138702 kapi.go:107] duration metric: took 1m46.503939327s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1216 11:16:05.884209 1138702 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-467441 cluster.
	I1216 11:16:05.887009 1138702 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1216 11:16:05.889899 1138702 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
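
The gcp-auth messages above describe opting a pod out of credential mounting by labelling it. A minimal client-go sketch of such a pod spec follows; it is illustrative, not minikube's code. The log names only the `gcp-auth-skip-secret` key, so the "true" value is an assumption, as are the pod name, image, and the kubeconfig location.

package main

import (
	"context"
	"log"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a clientset from the default kubeconfig (~/.kube/config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	pod := &corev1.Pod{
		ObjectMeta: metav1.ObjectMeta{
			Name: "no-gcp-creds", // illustrative name
			// Key taken from the gcp-auth message above; the "true"
			// value is an assumption, not stated in the log.
			Labels: map[string]string{"gcp-auth-skip-secret": "true"},
		},
		Spec: corev1.PodSpec{
			Containers: []corev1.Container{{Name: "app", Image: "nginx"}},
		},
	}
	// Pods carrying the label should be skipped by the gcp-auth webhook.
	if _, err := cs.CoreV1().Pods("default").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
		log.Fatal(err)
	}
}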
	I1216 11:16:06.040710 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:06.295656 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:06.538627 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:06.797356 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:07.036771 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:07.295551 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:07.439542 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:07.536560 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:07.795950 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:08.036131 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:08.296513 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:08.538057 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:08.796312 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:09.036726 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:09.295208 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:09.441323 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:09.541260 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:09.796537 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:10.038183 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:10.295618 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:10.541755 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:10.796300 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:11.037535 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:11.296846 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:11.539268 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:11.798839 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:11.941747 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:12.038056 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:12.296583 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:12.537623 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:12.795704 1138702 kapi.go:107] duration metric: took 1m57.504832911s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1216 11:16:13.038216 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:13.545692 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:13.942230 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:14.038673 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:14.536643 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:15.037285 1138702 kapi.go:107] duration metric: took 1m59.505703722s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1216 11:16:15.040657 1138702 out.go:177] * Enabled addons: amd-gpu-device-plugin, storage-provisioner, ingress-dns, cloud-spanner, nvidia-device-plugin, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1216 11:16:15.043733 1138702 addons.go:510] duration metric: took 2m7.111533733s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner ingress-dns cloud-spanner nvidia-device-plugin storage-provisioner-rancher inspektor-gadget metrics-server yakd volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
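
The repeated kapi.go:96 lines above come from a label-selector poll: list pods matching the label, log any that are not yet Running, sleep, and retry until the timeout, at which point kapi.go:107 reports the duration. A minimal sketch of that pattern with client-go follows; it is an assumed shape inferred from the log, not minikube's actual kapi.go, and the ~500ms interval is read off the timestamp cadence above.

package waitutil

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPodsByLabel polls pods matching selector in ns until all are
// Running or timeout elapses, logging the phase of any laggard on each
// pass (the shape of the "waiting for pod ..." lines above).
func waitForPodsByLabel(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		allRunning := len(pods.Items) > 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				log.Printf("waiting for pod %q, current state: %s", selector, p.Status.Phase)
				allRunning = false
			}
		}
		if allRunning {
			return nil
		}
		time.Sleep(500 * time.Millisecond) // interval assumed from the log cadence
	}
	return fmt.Errorf("timed out waiting for pods matching %q", selector)
}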
	I1216 11:16:16.440345 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:18.440594 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:20.940416 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:23.439645 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:25.440268 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:27.940054 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:29.940352 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:31.943464 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:34.439754 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:36.940804 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:39.439610 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:41.940885 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:44.440459 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:45.939607 1138702 pod_ready.go:93] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"True"
	I1216 11:16:45.939637 1138702 pod_ready.go:82] duration metric: took 1m48.006110092s for pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace to be "Ready" ...
	I1216 11:16:45.939653 1138702 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-zh27s" in "kube-system" namespace to be "Ready" ...
	I1216 11:16:45.945201 1138702 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-zh27s" in "kube-system" namespace has status "Ready":"True"
	I1216 11:16:45.945226 1138702 pod_ready.go:82] duration metric: took 5.565595ms for pod "nvidia-device-plugin-daemonset-zh27s" in "kube-system" namespace to be "Ready" ...
	I1216 11:16:45.945268 1138702 pod_ready.go:39] duration metric: took 1m50.947157506s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
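
The pod_ready.go lines above track the standard PodReady condition: the metrics-server pod reports "Ready":"False" for 1m48s and then flips to "True". A minimal sketch of the underlying check against corev1.PodReady follows; the helper name is illustrative, not minikube's.

package waitutil

import corev1 "k8s.io/api/core/v1"

// isPodReady reports whether the pod's PodReady condition is True — the
// check that flips the log line above from "Ready":"False" to "Ready":"True".
func isPodReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}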
	I1216 11:16:45.945291 1138702 api_server.go:52] waiting for apiserver process to appear ...
	I1216 11:16:45.945322 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:16:45.945412 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:16:46.001172 1138702 cri.go:89] found id: "ce98b18aa1ed3299204710fa4aecc87aad2125b7765a5fb4112a594d7957de48"
	I1216 11:16:46.001211 1138702 cri.go:89] found id: ""
	I1216 11:16:46.001220 1138702 logs.go:282] 1 containers: [ce98b18aa1ed3299204710fa4aecc87aad2125b7765a5fb4112a594d7957de48]
	I1216 11:16:46.001279 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:16:46.008141 1138702 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:16:46.008223 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:16:46.052702 1138702 cri.go:89] found id: "be2e989c41089971268e13abf590f1b0bda504564800a352bdf18d9378c9e3f2"
	I1216 11:16:46.052725 1138702 cri.go:89] found id: ""
	I1216 11:16:46.052738 1138702 logs.go:282] 1 containers: [be2e989c41089971268e13abf590f1b0bda504564800a352bdf18d9378c9e3f2]
	I1216 11:16:46.052852 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:16:46.056246 1138702 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:16:46.056325 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:16:46.097539 1138702 cri.go:89] found id: "dfc3285cc9ad50e22a2305800830e8b6b81d27450828ed41da8d90a5a81c2c8e"
	I1216 11:16:46.097565 1138702 cri.go:89] found id: ""
	I1216 11:16:46.097573 1138702 logs.go:282] 1 containers: [dfc3285cc9ad50e22a2305800830e8b6b81d27450828ed41da8d90a5a81c2c8e]
	I1216 11:16:46.097632 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:16:46.101329 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:16:46.101403 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:16:46.141737 1138702 cri.go:89] found id: "1087144bdb483abb8ae166e63a9d0bb8e7f790dfe4e00eab621499957bee00c6"
	I1216 11:16:46.141761 1138702 cri.go:89] found id: ""
	I1216 11:16:46.141770 1138702 logs.go:282] 1 containers: [1087144bdb483abb8ae166e63a9d0bb8e7f790dfe4e00eab621499957bee00c6]
	I1216 11:16:46.141847 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:16:46.145455 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:16:46.145559 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:16:46.183461 1138702 cri.go:89] found id: "16934743848f4975744a77abf304b3a77b61c059e8028249a4797ca7d0fe3184"
	I1216 11:16:46.183481 1138702 cri.go:89] found id: ""
	I1216 11:16:46.183489 1138702 logs.go:282] 1 containers: [16934743848f4975744a77abf304b3a77b61c059e8028249a4797ca7d0fe3184]
	I1216 11:16:46.183544 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:16:46.187103 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:16:46.187180 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:16:46.227346 1138702 cri.go:89] found id: "2cf6fde1ee7c9a9dd2473916be3c1f67ffb0c9877a501651e16c405e42fdc42b"
	I1216 11:16:46.227419 1138702 cri.go:89] found id: ""
	I1216 11:16:46.227441 1138702 logs.go:282] 1 containers: [2cf6fde1ee7c9a9dd2473916be3c1f67ffb0c9877a501651e16c405e42fdc42b]
	I1216 11:16:46.227533 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:16:46.231115 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:16:46.231191 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:16:46.271789 1138702 cri.go:89] found id: "7e63609c526376be42c32a8b4675e7e85b8b7926c5674de81fc7064aa893b180"
	I1216 11:16:46.271814 1138702 cri.go:89] found id: ""
	I1216 11:16:46.271823 1138702 logs.go:282] 1 containers: [7e63609c526376be42c32a8b4675e7e85b8b7926c5674de81fc7064aa893b180]
	I1216 11:16:46.271884 1138702 ssh_runner.go:195] Run: which crictl
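
The cri.go/ssh_runner.go exchange above runs one `sudo crictl ps -a --quiet --name=<component>` per control-plane component and records the returned container IDs before gathering logs. A rough local equivalent using os/exec, assuming crictl on PATH and passwordless sudo; `--quiet` prints bare container IDs, one per line, and `-a` includes exited containers.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// containerIDs returns the IDs of all CRI containers whose name matches,
// mirroring the "found id:" lines in the log above.
func containerIDs(name string) ([]string, error) {
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return nil, err
	}
	return strings.Fields(string(out)), nil
}

func main() {
	// The same components the log above queries, in the same order.
	for _, c := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
		ids, err := containerIDs(c)
		if err != nil {
			fmt.Println(c, "error:", err)
			continue
		}
		fmt.Println(c, ids)
	}
}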
	I1216 11:16:46.275339 1138702 logs.go:123] Gathering logs for kube-apiserver [ce98b18aa1ed3299204710fa4aecc87aad2125b7765a5fb4112a594d7957de48] ...
	I1216 11:16:46.275363 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce98b18aa1ed3299204710fa4aecc87aad2125b7765a5fb4112a594d7957de48"
	I1216 11:16:46.345234 1138702 logs.go:123] Gathering logs for kube-scheduler [1087144bdb483abb8ae166e63a9d0bb8e7f790dfe4e00eab621499957bee00c6] ...
	I1216 11:16:46.345265 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1087144bdb483abb8ae166e63a9d0bb8e7f790dfe4e00eab621499957bee00c6"
	I1216 11:16:46.391849 1138702 logs.go:123] Gathering logs for kube-proxy [16934743848f4975744a77abf304b3a77b61c059e8028249a4797ca7d0fe3184] ...
	I1216 11:16:46.391879 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16934743848f4975744a77abf304b3a77b61c059e8028249a4797ca7d0fe3184"
	I1216 11:16:46.429861 1138702 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:16:46.429892 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:16:46.520822 1138702 logs.go:123] Gathering logs for kubelet ...
	I1216 11:16:46.520863 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1216 11:16:46.602564 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.956195    1518 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-467441' and this object
	W1216 11:16:46.603029 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.956248    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:16:46.603289 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.956302    1518 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-467441" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-467441' and this object
	W1216 11:16:46.603541 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.956316    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:16:46.603740 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.960716    1518 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-467441' and this object
	W1216 11:16:46.603923 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.960834    1518 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-467441' and this object
	W1216 11:16:46.604168 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.960864    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:16:46.604396 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.960895    1518 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	I1216 11:16:46.650517 1138702 logs.go:123] Gathering logs for dmesg ...
	I1216 11:16:46.650558 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:16:46.672279 1138702 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:16:46.672315 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 11:16:46.870402 1138702 logs.go:123] Gathering logs for etcd [be2e989c41089971268e13abf590f1b0bda504564800a352bdf18d9378c9e3f2] ...
	I1216 11:16:46.870434 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2e989c41089971268e13abf590f1b0bda504564800a352bdf18d9378c9e3f2"
	I1216 11:16:46.934396 1138702 logs.go:123] Gathering logs for coredns [dfc3285cc9ad50e22a2305800830e8b6b81d27450828ed41da8d90a5a81c2c8e] ...
	I1216 11:16:46.934475 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfc3285cc9ad50e22a2305800830e8b6b81d27450828ed41da8d90a5a81c2c8e"
	I1216 11:16:46.982588 1138702 logs.go:123] Gathering logs for kube-controller-manager [2cf6fde1ee7c9a9dd2473916be3c1f67ffb0c9877a501651e16c405e42fdc42b] ...
	I1216 11:16:46.982620 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cf6fde1ee7c9a9dd2473916be3c1f67ffb0c9877a501651e16c405e42fdc42b"
	I1216 11:16:47.074457 1138702 logs.go:123] Gathering logs for kindnet [7e63609c526376be42c32a8b4675e7e85b8b7926c5674de81fc7064aa893b180] ...
	I1216 11:16:47.074491 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e63609c526376be42c32a8b4675e7e85b8b7926c5674de81fc7064aa893b180"
	I1216 11:16:47.119447 1138702 logs.go:123] Gathering logs for container status ...
	I1216 11:16:47.119477 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:16:47.180622 1138702 out.go:358] Setting ErrFile to fd 2...
	I1216 11:16:47.180650 1138702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1216 11:16:47.180709 1138702 out.go:270] X Problems detected in kubelet:
	W1216 11:16:47.180724 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.956316    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:16:47.180737 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.960716    1518 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-467441' and this object
	W1216 11:16:47.180817 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.960834    1518 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-467441' and this object
	W1216 11:16:47.180828 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.960864    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:16:47.180834 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.960895    1518 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	I1216 11:16:47.181006 1138702 out.go:358] Setting ErrFile to fd 2...
	I1216 11:16:47.181016 1138702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:16:57.181906 1138702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:16:57.195943 1138702 api_server.go:72] duration metric: took 2m49.263992573s to wait for apiserver process to appear ...
	I1216 11:16:57.195970 1138702 api_server.go:88] waiting for apiserver healthz status ...
	I1216 11:16:57.196006 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:16:57.196067 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:16:57.236415 1138702 cri.go:89] found id: "ce98b18aa1ed3299204710fa4aecc87aad2125b7765a5fb4112a594d7957de48"
	I1216 11:16:57.236445 1138702 cri.go:89] found id: ""
	I1216 11:16:57.236453 1138702 logs.go:282] 1 containers: [ce98b18aa1ed3299204710fa4aecc87aad2125b7765a5fb4112a594d7957de48]
	I1216 11:16:57.236509 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:16:57.240157 1138702 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:16:57.240233 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:16:57.281957 1138702 cri.go:89] found id: "be2e989c41089971268e13abf590f1b0bda504564800a352bdf18d9378c9e3f2"
	I1216 11:16:57.281980 1138702 cri.go:89] found id: ""
	I1216 11:16:57.281988 1138702 logs.go:282] 1 containers: [be2e989c41089971268e13abf590f1b0bda504564800a352bdf18d9378c9e3f2]
	I1216 11:16:57.282045 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:16:57.285452 1138702 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:16:57.285526 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:16:57.324812 1138702 cri.go:89] found id: "dfc3285cc9ad50e22a2305800830e8b6b81d27450828ed41da8d90a5a81c2c8e"
	I1216 11:16:57.324833 1138702 cri.go:89] found id: ""
	I1216 11:16:57.324842 1138702 logs.go:282] 1 containers: [dfc3285cc9ad50e22a2305800830e8b6b81d27450828ed41da8d90a5a81c2c8e]
	I1216 11:16:57.324903 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:16:57.328554 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:16:57.328649 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:16:57.367780 1138702 cri.go:89] found id: "1087144bdb483abb8ae166e63a9d0bb8e7f790dfe4e00eab621499957bee00c6"
	I1216 11:16:57.367804 1138702 cri.go:89] found id: ""
	I1216 11:16:57.367812 1138702 logs.go:282] 1 containers: [1087144bdb483abb8ae166e63a9d0bb8e7f790dfe4e00eab621499957bee00c6]
	I1216 11:16:57.367873 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:16:57.372459 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:16:57.372532 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:16:57.416074 1138702 cri.go:89] found id: "16934743848f4975744a77abf304b3a77b61c059e8028249a4797ca7d0fe3184"
	I1216 11:16:57.416098 1138702 cri.go:89] found id: ""
	I1216 11:16:57.416106 1138702 logs.go:282] 1 containers: [16934743848f4975744a77abf304b3a77b61c059e8028249a4797ca7d0fe3184]
	I1216 11:16:57.416163 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:16:57.420351 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:16:57.420428 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:16:57.460422 1138702 cri.go:89] found id: "2cf6fde1ee7c9a9dd2473916be3c1f67ffb0c9877a501651e16c405e42fdc42b"
	I1216 11:16:57.460442 1138702 cri.go:89] found id: ""
	I1216 11:16:57.460450 1138702 logs.go:282] 1 containers: [2cf6fde1ee7c9a9dd2473916be3c1f67ffb0c9877a501651e16c405e42fdc42b]
	I1216 11:16:57.460506 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:16:57.464240 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:16:57.464317 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:16:57.509878 1138702 cri.go:89] found id: "7e63609c526376be42c32a8b4675e7e85b8b7926c5674de81fc7064aa893b180"
	I1216 11:16:57.509903 1138702 cri.go:89] found id: ""
	I1216 11:16:57.509927 1138702 logs.go:282] 1 containers: [7e63609c526376be42c32a8b4675e7e85b8b7926c5674de81fc7064aa893b180]
	I1216 11:16:57.509990 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:16:57.514082 1138702 logs.go:123] Gathering logs for etcd [be2e989c41089971268e13abf590f1b0bda504564800a352bdf18d9378c9e3f2] ...
	I1216 11:16:57.514148 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2e989c41089971268e13abf590f1b0bda504564800a352bdf18d9378c9e3f2"
	I1216 11:16:57.569358 1138702 logs.go:123] Gathering logs for kube-scheduler [1087144bdb483abb8ae166e63a9d0bb8e7f790dfe4e00eab621499957bee00c6] ...
	I1216 11:16:57.569392 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1087144bdb483abb8ae166e63a9d0bb8e7f790dfe4e00eab621499957bee00c6"
	I1216 11:16:57.617759 1138702 logs.go:123] Gathering logs for kubelet ...
	I1216 11:16:57.617798 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1216 11:16:57.700328 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.956195    1518 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-467441' and this object
	W1216 11:16:57.700603 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.956248    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:16:57.700803 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.956302    1518 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-467441" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-467441' and this object
	W1216 11:16:57.701034 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.956316    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:16:57.701213 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.960716    1518 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-467441' and this object
	W1216 11:16:57.701382 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.960834    1518 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-467441' and this object
	W1216 11:16:57.701596 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.960864    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:16:57.701820 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.960895    1518 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	I1216 11:16:57.743823 1138702 logs.go:123] Gathering logs for dmesg ...
	I1216 11:16:57.743856 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:16:57.760967 1138702 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:16:57.760998 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 11:16:57.905525 1138702 logs.go:123] Gathering logs for kube-controller-manager [2cf6fde1ee7c9a9dd2473916be3c1f67ffb0c9877a501651e16c405e42fdc42b] ...
	I1216 11:16:57.905557 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cf6fde1ee7c9a9dd2473916be3c1f67ffb0c9877a501651e16c405e42fdc42b"
	I1216 11:16:57.994161 1138702 logs.go:123] Gathering logs for kindnet [7e63609c526376be42c32a8b4675e7e85b8b7926c5674de81fc7064aa893b180] ...
	I1216 11:16:57.994201 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e63609c526376be42c32a8b4675e7e85b8b7926c5674de81fc7064aa893b180"
	I1216 11:16:58.040425 1138702 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:16:58.040455 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:16:58.137749 1138702 logs.go:123] Gathering logs for container status ...
	I1216 11:16:58.137788 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:16:58.201503 1138702 logs.go:123] Gathering logs for kube-apiserver [ce98b18aa1ed3299204710fa4aecc87aad2125b7765a5fb4112a594d7957de48] ...
	I1216 11:16:58.201535 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce98b18aa1ed3299204710fa4aecc87aad2125b7765a5fb4112a594d7957de48"
	I1216 11:16:58.259240 1138702 logs.go:123] Gathering logs for coredns [dfc3285cc9ad50e22a2305800830e8b6b81d27450828ed41da8d90a5a81c2c8e] ...
	I1216 11:16:58.259280 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfc3285cc9ad50e22a2305800830e8b6b81d27450828ed41da8d90a5a81c2c8e"
	I1216 11:16:58.305035 1138702 logs.go:123] Gathering logs for kube-proxy [16934743848f4975744a77abf304b3a77b61c059e8028249a4797ca7d0fe3184] ...
	I1216 11:16:58.305064 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16934743848f4975744a77abf304b3a77b61c059e8028249a4797ca7d0fe3184"
	I1216 11:16:58.342723 1138702 out.go:358] Setting ErrFile to fd 2...
	I1216 11:16:58.342749 1138702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1216 11:16:58.342800 1138702 out.go:270] X Problems detected in kubelet:
	W1216 11:16:58.342816 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.956316    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:16:58.342823 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.960716    1518 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-467441' and this object
	W1216 11:16:58.342833 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.960834    1518 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-467441' and this object
	W1216 11:16:58.342843 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.960864    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:16:58.342851 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.960895    1518 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	I1216 11:16:58.342860 1138702 out.go:358] Setting ErrFile to fd 2...
	I1216 11:16:58.342867 1138702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:17:08.343706 1138702 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 11:17:08.353682 1138702 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1216 11:17:08.354760 1138702 api_server.go:141] control plane version: v1.31.2
	I1216 11:17:08.354788 1138702 api_server.go:131] duration metric: took 11.158809876s to wait for apiserver health ...
	I1216 11:17:08.354797 1138702 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 11:17:08.354818 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:17:08.354886 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:17:08.401145 1138702 cri.go:89] found id: "ce98b18aa1ed3299204710fa4aecc87aad2125b7765a5fb4112a594d7957de48"
	I1216 11:17:08.401166 1138702 cri.go:89] found id: ""
	I1216 11:17:08.401175 1138702 logs.go:282] 1 containers: [ce98b18aa1ed3299204710fa4aecc87aad2125b7765a5fb4112a594d7957de48]
	I1216 11:17:08.401232 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:17:08.405646 1138702 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:17:08.405771 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:17:08.446571 1138702 cri.go:89] found id: "be2e989c41089971268e13abf590f1b0bda504564800a352bdf18d9378c9e3f2"
	I1216 11:17:08.446606 1138702 cri.go:89] found id: ""
	I1216 11:17:08.446615 1138702 logs.go:282] 1 containers: [be2e989c41089971268e13abf590f1b0bda504564800a352bdf18d9378c9e3f2]
	I1216 11:17:08.446689 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:17:08.450258 1138702 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:17:08.450339 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:17:08.487771 1138702 cri.go:89] found id: "dfc3285cc9ad50e22a2305800830e8b6b81d27450828ed41da8d90a5a81c2c8e"
	I1216 11:17:08.487796 1138702 cri.go:89] found id: ""
	I1216 11:17:08.487805 1138702 logs.go:282] 1 containers: [dfc3285cc9ad50e22a2305800830e8b6b81d27450828ed41da8d90a5a81c2c8e]
	I1216 11:17:08.487863 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:17:08.493160 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:17:08.493244 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:17:08.535149 1138702 cri.go:89] found id: "1087144bdb483abb8ae166e63a9d0bb8e7f790dfe4e00eab621499957bee00c6"
	I1216 11:17:08.535184 1138702 cri.go:89] found id: ""
	I1216 11:17:08.535193 1138702 logs.go:282] 1 containers: [1087144bdb483abb8ae166e63a9d0bb8e7f790dfe4e00eab621499957bee00c6]
	I1216 11:17:08.535275 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:17:08.539036 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:17:08.539113 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:17:08.579518 1138702 cri.go:89] found id: "16934743848f4975744a77abf304b3a77b61c059e8028249a4797ca7d0fe3184"
	I1216 11:17:08.579541 1138702 cri.go:89] found id: ""
	I1216 11:17:08.579552 1138702 logs.go:282] 1 containers: [16934743848f4975744a77abf304b3a77b61c059e8028249a4797ca7d0fe3184]
	I1216 11:17:08.579609 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:17:08.583275 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:17:08.583352 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:17:08.634675 1138702 cri.go:89] found id: "2cf6fde1ee7c9a9dd2473916be3c1f67ffb0c9877a501651e16c405e42fdc42b"
	I1216 11:17:08.634697 1138702 cri.go:89] found id: ""
	I1216 11:17:08.634706 1138702 logs.go:282] 1 containers: [2cf6fde1ee7c9a9dd2473916be3c1f67ffb0c9877a501651e16c405e42fdc42b]
	I1216 11:17:08.634781 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:17:08.638226 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:17:08.638296 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:17:08.688309 1138702 cri.go:89] found id: "7e63609c526376be42c32a8b4675e7e85b8b7926c5674de81fc7064aa893b180"
	I1216 11:17:08.688332 1138702 cri.go:89] found id: ""
	I1216 11:17:08.688341 1138702 logs.go:282] 1 containers: [7e63609c526376be42c32a8b4675e7e85b8b7926c5674de81fc7064aa893b180]
	I1216 11:17:08.688403 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:17:08.692015 1138702 logs.go:123] Gathering logs for kube-controller-manager [2cf6fde1ee7c9a9dd2473916be3c1f67ffb0c9877a501651e16c405e42fdc42b] ...
	I1216 11:17:08.692042 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cf6fde1ee7c9a9dd2473916be3c1f67ffb0c9877a501651e16c405e42fdc42b"
	I1216 11:17:08.791592 1138702 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:17:08.791628 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:17:08.914178 1138702 logs.go:123] Gathering logs for kubelet ...
	I1216 11:17:08.914254 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1216 11:17:08.999128 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.956195    1518 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-467441' and this object
	W1216 11:17:08.999406 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.956248    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:17:08.999628 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.956302    1518 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-467441" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-467441' and this object
	W1216 11:17:08.999859 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.956316    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:17:09.000041 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.960716    1518 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-467441' and this object
	W1216 11:17:09.000218 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.960834    1518 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-467441' and this object
	W1216 11:17:09.000428 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.960864    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:17:09.000651 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.960895    1518 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	I1216 11:17:09.043650 1138702 logs.go:123] Gathering logs for dmesg ...
	I1216 11:17:09.043686 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:17:09.060745 1138702 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:17:09.060787 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 11:17:09.202048 1138702 logs.go:123] Gathering logs for kube-apiserver [ce98b18aa1ed3299204710fa4aecc87aad2125b7765a5fb4112a594d7957de48] ...
	I1216 11:17:09.202080 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce98b18aa1ed3299204710fa4aecc87aad2125b7765a5fb4112a594d7957de48"
	I1216 11:17:09.265441 1138702 logs.go:123] Gathering logs for etcd [be2e989c41089971268e13abf590f1b0bda504564800a352bdf18d9378c9e3f2] ...
	I1216 11:17:09.265478 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2e989c41089971268e13abf590f1b0bda504564800a352bdf18d9378c9e3f2"
	I1216 11:17:09.321017 1138702 logs.go:123] Gathering logs for coredns [dfc3285cc9ad50e22a2305800830e8b6b81d27450828ed41da8d90a5a81c2c8e] ...
	I1216 11:17:09.321050 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfc3285cc9ad50e22a2305800830e8b6b81d27450828ed41da8d90a5a81c2c8e"
	I1216 11:17:09.371014 1138702 logs.go:123] Gathering logs for kube-scheduler [1087144bdb483abb8ae166e63a9d0bb8e7f790dfe4e00eab621499957bee00c6] ...
	I1216 11:17:09.371051 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1087144bdb483abb8ae166e63a9d0bb8e7f790dfe4e00eab621499957bee00c6"
	I1216 11:17:09.418757 1138702 logs.go:123] Gathering logs for kube-proxy [16934743848f4975744a77abf304b3a77b61c059e8028249a4797ca7d0fe3184] ...
	I1216 11:17:09.418790 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16934743848f4975744a77abf304b3a77b61c059e8028249a4797ca7d0fe3184"
	I1216 11:17:09.458217 1138702 logs.go:123] Gathering logs for kindnet [7e63609c526376be42c32a8b4675e7e85b8b7926c5674de81fc7064aa893b180] ...
	I1216 11:17:09.458245 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e63609c526376be42c32a8b4675e7e85b8b7926c5674de81fc7064aa893b180"
	I1216 11:17:09.503090 1138702 logs.go:123] Gathering logs for container status ...
	I1216 11:17:09.503127 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:17:09.554217 1138702 out.go:358] Setting ErrFile to fd 2...
	I1216 11:17:09.554245 1138702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1216 11:17:09.554311 1138702 out.go:270] X Problems detected in kubelet:
	W1216 11:17:09.554328 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.956316    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:17:09.554342 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.960716    1518 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-467441' and this object
	W1216 11:17:09.554352 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.960834    1518 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-467441' and this object
	W1216 11:17:09.554359 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.960864    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:17:09.554367 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.960895    1518 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	I1216 11:17:09.554381 1138702 out.go:358] Setting ErrFile to fd 2...
	I1216 11:17:09.554388 1138702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:17:19.566657 1138702 system_pods.go:59] 18 kube-system pods found
	I1216 11:17:19.566695 1138702 system_pods.go:61] "coredns-7c65d6cfc9-q957p" [82d9993b-39f8-4677-b836-c7cd37117b1a] Running
	I1216 11:17:19.566703 1138702 system_pods.go:61] "csi-hostpath-attacher-0" [94788c63-6666-44b4-83c3-7fe1f6ebcaf9] Running
	I1216 11:17:19.566707 1138702 system_pods.go:61] "csi-hostpath-resizer-0" [d21a0e84-86fe-47a9-8ccd-682aa6f7f144] Running
	I1216 11:17:19.567009 1138702 system_pods.go:61] "csi-hostpathplugin-mpt97" [339d76af-f10c-43ce-b0ea-7ae551d3d1a5] Running
	I1216 11:17:19.567021 1138702 system_pods.go:61] "etcd-addons-467441" [f49ade91-d760-43f8-ae39-79b39c0e47a4] Running
	I1216 11:17:19.567026 1138702 system_pods.go:61] "kindnet-xpdrb" [58cae89f-628b-407b-8d36-12e7fdd1244d] Running
	I1216 11:17:19.567032 1138702 system_pods.go:61] "kube-apiserver-addons-467441" [6ce3401b-45d9-4799-9597-95b79f51b386] Running
	I1216 11:17:19.567037 1138702 system_pods.go:61] "kube-controller-manager-addons-467441" [5e691c63-7bec-42d1-bc95-28f433f30b4a] Running
	I1216 11:17:19.567041 1138702 system_pods.go:61] "kube-ingress-dns-minikube" [4350d794-5394-4140-8327-30f5a49dfb05] Running
	I1216 11:17:19.567045 1138702 system_pods.go:61] "kube-proxy-pss99" [ea376084-34b2-4d86-955d-27196e1014e6] Running
	I1216 11:17:19.567048 1138702 system_pods.go:61] "kube-scheduler-addons-467441" [48844348-8139-488d-85ae-7138147160bb] Running
	I1216 11:17:19.567052 1138702 system_pods.go:61] "metrics-server-84c5f94fbc-vwzrq" [702d35be-9a96-4ad2-b0dd-6e3c9ff3d4aa] Running
	I1216 11:17:19.567056 1138702 system_pods.go:61] "nvidia-device-plugin-daemonset-zh27s" [29ad869e-9aed-4717-ab7c-b8ba4cf3c784] Running
	I1216 11:17:19.567060 1138702 system_pods.go:61] "registry-5cc95cd69-f5zh4" [e511e988-2365-410f-8684-de95a39675bf] Running
	I1216 11:17:19.567083 1138702 system_pods.go:61] "registry-proxy-x5969" [ebb5d950-3c97-4dff-b737-8817d4630dcc] Running
	I1216 11:17:19.567090 1138702 system_pods.go:61] "snapshot-controller-56fcc65765-45zdj" [728c876b-c4ad-4289-b1f4-8310ca8a70c6] Running
	I1216 11:17:19.567094 1138702 system_pods.go:61] "snapshot-controller-56fcc65765-smdxv" [feffc5a8-04d1-4848-ba68-da3e522bc18f] Running
	I1216 11:17:19.567097 1138702 system_pods.go:61] "storage-provisioner" [943dc892-0be5-4c97-8093-d793a3d09c44] Running
	I1216 11:17:19.567103 1138702 system_pods.go:74] duration metric: took 11.212300961s to wait for pod list to return data ...
	I1216 11:17:19.567112 1138702 default_sa.go:34] waiting for default service account to be created ...
	I1216 11:17:19.574009 1138702 default_sa.go:45] found service account: "default"
	I1216 11:17:19.574037 1138702 default_sa.go:55] duration metric: took 6.919054ms for default service account to be created ...
	I1216 11:17:19.574048 1138702 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 11:17:19.584764 1138702 system_pods.go:86] 18 kube-system pods found
	I1216 11:17:19.584799 1138702 system_pods.go:89] "coredns-7c65d6cfc9-q957p" [82d9993b-39f8-4677-b836-c7cd37117b1a] Running
	I1216 11:17:19.584808 1138702 system_pods.go:89] "csi-hostpath-attacher-0" [94788c63-6666-44b4-83c3-7fe1f6ebcaf9] Running
	I1216 11:17:19.584813 1138702 system_pods.go:89] "csi-hostpath-resizer-0" [d21a0e84-86fe-47a9-8ccd-682aa6f7f144] Running
	I1216 11:17:19.584836 1138702 system_pods.go:89] "csi-hostpathplugin-mpt97" [339d76af-f10c-43ce-b0ea-7ae551d3d1a5] Running
	I1216 11:17:19.584845 1138702 system_pods.go:89] "etcd-addons-467441" [f49ade91-d760-43f8-ae39-79b39c0e47a4] Running
	I1216 11:17:19.584850 1138702 system_pods.go:89] "kindnet-xpdrb" [58cae89f-628b-407b-8d36-12e7fdd1244d] Running
	I1216 11:17:19.584858 1138702 system_pods.go:89] "kube-apiserver-addons-467441" [6ce3401b-45d9-4799-9597-95b79f51b386] Running
	I1216 11:17:19.584863 1138702 system_pods.go:89] "kube-controller-manager-addons-467441" [5e691c63-7bec-42d1-bc95-28f433f30b4a] Running
	I1216 11:17:19.584869 1138702 system_pods.go:89] "kube-ingress-dns-minikube" [4350d794-5394-4140-8327-30f5a49dfb05] Running
	I1216 11:17:19.584873 1138702 system_pods.go:89] "kube-proxy-pss99" [ea376084-34b2-4d86-955d-27196e1014e6] Running
	I1216 11:17:19.584879 1138702 system_pods.go:89] "kube-scheduler-addons-467441" [48844348-8139-488d-85ae-7138147160bb] Running
	I1216 11:17:19.584887 1138702 system_pods.go:89] "metrics-server-84c5f94fbc-vwzrq" [702d35be-9a96-4ad2-b0dd-6e3c9ff3d4aa] Running
	I1216 11:17:19.584891 1138702 system_pods.go:89] "nvidia-device-plugin-daemonset-zh27s" [29ad869e-9aed-4717-ab7c-b8ba4cf3c784] Running
	I1216 11:17:19.584895 1138702 system_pods.go:89] "registry-5cc95cd69-f5zh4" [e511e988-2365-410f-8684-de95a39675bf] Running
	I1216 11:17:19.584911 1138702 system_pods.go:89] "registry-proxy-x5969" [ebb5d950-3c97-4dff-b737-8817d4630dcc] Running
	I1216 11:17:19.584920 1138702 system_pods.go:89] "snapshot-controller-56fcc65765-45zdj" [728c876b-c4ad-4289-b1f4-8310ca8a70c6] Running
	I1216 11:17:19.584926 1138702 system_pods.go:89] "snapshot-controller-56fcc65765-smdxv" [feffc5a8-04d1-4848-ba68-da3e522bc18f] Running
	I1216 11:17:19.584930 1138702 system_pods.go:89] "storage-provisioner" [943dc892-0be5-4c97-8093-d793a3d09c44] Running
	I1216 11:17:19.584948 1138702 system_pods.go:126] duration metric: took 10.894324ms to wait for k8s-apps to be running ...
	I1216 11:17:19.584963 1138702 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 11:17:19.585033 1138702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 11:17:19.596970 1138702 system_svc.go:56] duration metric: took 11.998812ms WaitForService to wait for kubelet
	I1216 11:17:19.597004 1138702 kubeadm.go:582] duration metric: took 3m11.665061999s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 11:17:19.597025 1138702 node_conditions.go:102] verifying NodePressure condition ...
	I1216 11:17:19.600582 1138702 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 11:17:19.600618 1138702 node_conditions.go:123] node cpu capacity is 2
	I1216 11:17:19.600632 1138702 node_conditions.go:105] duration metric: took 3.599207ms to run NodePressure ...
	I1216 11:17:19.600644 1138702 start.go:241] waiting for startup goroutines ...
	I1216 11:17:19.600663 1138702 start.go:246] waiting for cluster config update ...
	I1216 11:17:19.600685 1138702 start.go:255] writing updated cluster config ...
	I1216 11:17:19.601023 1138702 ssh_runner.go:195] Run: rm -f paused
	I1216 11:17:19.993279 1138702 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1216 11:17:19.996682 1138702 out.go:177] * Done! kubectl is now configured to use "addons-467441" cluster and "default" namespace by default
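
The repeated kubelet warnings flagged above ("no relationship found between node 'addons-467441' and this object") are emitted by the kube-apiserver Node authorizer, which only lets a node identity read ConfigMaps and Secrets referenced by pods already bound to that node; they typically show up as a transient race while the ingress-nginx and gcp-auth pods are still being scheduled. Assuming the test cluster's kubeconfig is still active, the denial can be reproduced after the fact with impersonation (a diagnostic sketch, not part of the test run itself):

  # Ask the API server whether the node identity may list the configmap
  # that the kubelet reflector was trying to watch.
  kubectl auth can-i list configmaps \
    --as=system:node:addons-467441 --as-group=system:nodes \
    -n ingress-nginx

A "no" answer here is expected behaviour for objects with no pod relationship on that node, rather than a cluster fault.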
	
	
	==> CRI-O <==
	Dec 16 11:19:07 addons-467441 crio[984]: time="2024-12-16 11:19:07.284206048Z" level=info msg="Started container" PID=8461 containerID=2a6318d7d7c28c0009c9836c7050bb926b76d80e1c971ababf90775c7a065b31 description=default/nginx/nginx id=81ae1093-615a-459e-82f2-45c31408f9c7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=559f175560e1e9d0275a38b31bf7fb6690881fbd615d6bff61aad13fe3456ff3
	Dec 16 11:21:24 addons-467441 crio[984]: time="2024-12-16 11:21:24.976208118Z" level=info msg="Running pod sandbox: default/hello-world-app-55bf9c44b4-5984x/POD" id=99c990dd-9d71-4535-83ab-fad07d4e9b21 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 11:21:24 addons-467441 crio[984]: time="2024-12-16 11:21:24.976268088Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 16 11:21:25 addons-467441 crio[984]: time="2024-12-16 11:21:25.004245861Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-5984x Namespace:default ID:08167ca4d8f536ca6f526cd7f90b47dbcfd744e1780563c8938542c5418a258b UID:f3f29727-9a3b-4636-bf7a-f0e2d0c3ec36 NetNS:/var/run/netns/5704b8dc-3275-4314-944a-34901f18c0ea Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 16 11:21:25 addons-467441 crio[984]: time="2024-12-16 11:21:25.004291366Z" level=info msg="Adding pod default_hello-world-app-55bf9c44b4-5984x to CNI network \"kindnet\" (type=ptp)"
	Dec 16 11:21:25 addons-467441 crio[984]: time="2024-12-16 11:21:25.015948522Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-5984x Namespace:default ID:08167ca4d8f536ca6f526cd7f90b47dbcfd744e1780563c8938542c5418a258b UID:f3f29727-9a3b-4636-bf7a-f0e2d0c3ec36 NetNS:/var/run/netns/5704b8dc-3275-4314-944a-34901f18c0ea Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 16 11:21:25 addons-467441 crio[984]: time="2024-12-16 11:21:25.016119332Z" level=info msg="Checking pod default_hello-world-app-55bf9c44b4-5984x for CNI network kindnet (type=ptp)"
	Dec 16 11:21:25 addons-467441 crio[984]: time="2024-12-16 11:21:25.019665373Z" level=info msg="Ran pod sandbox 08167ca4d8f536ca6f526cd7f90b47dbcfd744e1780563c8938542c5418a258b with infra container: default/hello-world-app-55bf9c44b4-5984x/POD" id=99c990dd-9d71-4535-83ab-fad07d4e9b21 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 16 11:21:25 addons-467441 crio[984]: time="2024-12-16 11:21:25.022517795Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=4e834531-b983-49b7-8ee2-8df91f406db7 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 11:21:25 addons-467441 crio[984]: time="2024-12-16 11:21:25.022783232Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=4e834531-b983-49b7-8ee2-8df91f406db7 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 11:21:25 addons-467441 crio[984]: time="2024-12-16 11:21:25.026331341Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=bf4e88fb-ad1c-4a09-897f-47295797738c name=/runtime.v1.ImageService/PullImage
	Dec 16 11:21:25 addons-467441 crio[984]: time="2024-12-16 11:21:25.030003376Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 16 11:21:25 addons-467441 crio[984]: time="2024-12-16 11:21:25.299849641Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 16 11:21:26 addons-467441 crio[984]: time="2024-12-16 11:21:26.063028159Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=bf4e88fb-ad1c-4a09-897f-47295797738c name=/runtime.v1.ImageService/PullImage
	Dec 16 11:21:26 addons-467441 crio[984]: time="2024-12-16 11:21:26.063998054Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=2524f05f-5567-480e-919f-035339f66b38 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 11:21:26 addons-467441 crio[984]: time="2024-12-16 11:21:26.064694142Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=2524f05f-5567-480e-919f-035339f66b38 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 11:21:26 addons-467441 crio[984]: time="2024-12-16 11:21:26.066015675Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=f8fd3f6e-ae5b-4d65-9392-c329b39c2ca1 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 11:21:26 addons-467441 crio[984]: time="2024-12-16 11:21:26.066957607Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=f8fd3f6e-ae5b-4d65-9392-c329b39c2ca1 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 11:21:26 addons-467441 crio[984]: time="2024-12-16 11:21:26.068366252Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-5984x/hello-world-app" id=a7573399-f12a-4173-b059-7dd9b3db6dcc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 11:21:26 addons-467441 crio[984]: time="2024-12-16 11:21:26.068464965Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 16 11:21:26 addons-467441 crio[984]: time="2024-12-16 11:21:26.091813361Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/cfc3bf1c0bdc42f7fb6b2c193d5ef87bd5d3a6bcf9304d02563f9bc9b63e903a/merged/etc/passwd: no such file or directory"
	Dec 16 11:21:26 addons-467441 crio[984]: time="2024-12-16 11:21:26.091999572Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/cfc3bf1c0bdc42f7fb6b2c193d5ef87bd5d3a6bcf9304d02563f9bc9b63e903a/merged/etc/group: no such file or directory"
	Dec 16 11:21:26 addons-467441 crio[984]: time="2024-12-16 11:21:26.160263228Z" level=info msg="Created container 0ecf3bf56527f64786526ad9c85c65fe2c58ff084d96132d92d9080a98525651: default/hello-world-app-55bf9c44b4-5984x/hello-world-app" id=a7573399-f12a-4173-b059-7dd9b3db6dcc name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 11:21:26 addons-467441 crio[984]: time="2024-12-16 11:21:26.161528368Z" level=info msg="Starting container: 0ecf3bf56527f64786526ad9c85c65fe2c58ff084d96132d92d9080a98525651" id=e5ca9dc9-21a8-42ad-b1ec-6303208611c7 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 11:21:26 addons-467441 crio[984]: time="2024-12-16 11:21:26.173078541Z" level=info msg="Started container" PID=8738 containerID=0ecf3bf56527f64786526ad9c85c65fe2c58ff084d96132d92d9080a98525651 description=default/hello-world-app-55bf9c44b4-5984x/hello-world-app id=e5ca9dc9-21a8-42ad-b1ec-6303208611c7 name=/runtime.v1.RuntimeService/StartContainer sandboxID=08167ca4d8f536ca6f526cd7f90b47dbcfd744e1780563c8938542c5418a258b
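
In the CRI-O excerpt above, the two "Failed to open /etc/passwd" / "/etc/group" warnings during container creation come from CRI-O resolving the container user inside the image's merged filesystem; for minimal images such as kicbase/echo-server that ship no user database, these appear to be benign, and the container starts normally two lines later. If the pulled image needs to be confirmed on the node, crictl can inspect it directly (a sketch assuming shell access to the node, e.g. via minikube ssh):

  # Show the digests, size, and config of the image CRI-O just pulled.
  sudo crictl inspecti docker.io/kicbase/echo-server:1.0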
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                       ATTEMPT             POD ID              POD
	0ecf3bf56527f       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app            0                   08167ca4d8f53       hello-world-app-55bf9c44b4-5984x
	2a6318d7d7c28       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                              2 minutes ago            Running             nginx                      0                   559f175560e1e       nginx
	61516daa3bce2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          4 minutes ago            Running             busybox                    0                   94696fa4b7bb6       busybox
	8495e52da0202       registry.k8s.io/ingress-nginx/controller@sha256:787a5408fa511266888b2e765f9666bee67d9bf2518a6b7cfd4ab6cc01c22eee             5 minutes ago            Running             controller                 0                   9a08eee113df7       ingress-nginx-controller-5f85ff4588-bblp5
	2ad27583dea86       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0550b75a965592f1dde3fbeaa98f67a1e10c5a086bcd69a29054cc4edcb56771   5 minutes ago            Exited              patch                      0                   de46cb98ee09c       ingress-nginx-admission-patch-ctlcz
	b48f2b12f06ae       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0550b75a965592f1dde3fbeaa98f67a1e10c5a086bcd69a29054cc4edcb56771   5 minutes ago            Exited              create                     0                   0e93863e0f0ff       ingress-nginx-admission-create-56cpm
	84929e861c8a7       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             5 minutes ago            Running             local-path-provisioner     0                   a2e85f89937f8       local-path-provisioner-86d989889c-vwklx
	0e532871b67d2       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        5 minutes ago            Running             metrics-server             0                   eb86c5ec58d0a       metrics-server-84c5f94fbc-vwzrq
	1a06ec80011d3       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             5 minutes ago            Running             minikube-ingress-dns       0                   bd1b319b98bc3       kube-ingress-dns-minikube
	fdc0dc2ac0239       docker.io/marcnuri/yakd@sha256:1c961556224d57fc747de0b1874524208e5fb4f8386f23e9c1c4c18e97109f17                              5 minutes ago            Running             yakd                       0                   0e6b556703679       yakd-dashboard-67d98fc6b-hp8vn
	73c742742b84c       gcr.io/cloud-spanner-emulator/emulator@sha256:7cf2be1ac85c39a0c5b34185b6c3d0ea479269f5c8ecc785713308f93194ca27               6 minutes ago            Running             cloud-spanner-emulator     0                   9b21a75ee6481       cloud-spanner-emulator-dc5db94f4-6fvnl
	c8a4ce073d932       nvcr.io/nvidia/k8s-device-plugin@sha256:7089559ce6153018806857f5049085bae15b3bf6f1c8bd19d8b12f707d087dea                     6 minutes ago            Running             nvidia-device-plugin-ctr   0                   89bdbd92615d8       nvidia-device-plugin-daemonset-zh27s
	dfc3285cc9ad5       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             6 minutes ago            Running             coredns                    0                   bb6987a88a4a1       coredns-7c65d6cfc9-q957p
	c7daab6e1e325       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             6 minutes ago            Running             storage-provisioner        0                   354f75e8a8fe8       storage-provisioner
	7e63609c52637       docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e                           7 minutes ago            Running             kindnet-cni                0                   0728522c6e7f8       kindnet-xpdrb
	16934743848f4       021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba                                                             7 minutes ago            Running             kube-proxy                 0                   40ec3fe26d4a5       kube-proxy-pss99
	2cf6fde1ee7c9       9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba                                                             7 minutes ago            Running             kube-controller-manager    0                   01fb0a0cf4715       kube-controller-manager-addons-467441
	1087144bdb483       d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a                                                             7 minutes ago            Running             kube-scheduler             0                   c94024647fea1       kube-scheduler-addons-467441
	be2e989c41089       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             7 minutes ago            Running             etcd                       0                   578635ee28890       etcd-addons-467441
	ce98b18aa1ed3       f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270                                                             7 minutes ago            Running             kube-apiserver             0                   eb93b0602613e       kube-apiserver-addons-467441
	
	
	==> coredns [dfc3285cc9ad50e22a2305800830e8b6b81d27450828ed41da8d90a5a81c2c8e] <==
	[INFO] 10.244.0.3:52395 - 58511 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.001719776s
	[INFO] 10.244.0.3:52395 - 58337 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000070423s
	[INFO] 10.244.0.3:52395 - 23641 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000097466s
	[INFO] 10.244.0.3:50892 - 44426 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000098246s
	[INFO] 10.244.0.3:50892 - 44204 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000159036s
	[INFO] 10.244.0.3:56743 - 49172 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.00006038s
	[INFO] 10.244.0.3:56743 - 48992 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000119103s
	[INFO] 10.244.0.3:34867 - 27016 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00005037s
	[INFO] 10.244.0.3:34867 - 26843 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000072367s
	[INFO] 10.244.0.3:40014 - 12205 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001516598s
	[INFO] 10.244.0.3:40014 - 12000 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001018731s
	[INFO] 10.244.0.3:35247 - 4854 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000062874s
	[INFO] 10.244.0.3:35247 - 5010 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000149707s
	[INFO] 10.244.0.20:37873 - 36527 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000150133s
	[INFO] 10.244.0.20:50809 - 20307 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000410123s
	[INFO] 10.244.0.20:56855 - 19704 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000168086s
	[INFO] 10.244.0.20:41325 - 56410 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000097343s
	[INFO] 10.244.0.20:40138 - 20797 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000137473s
	[INFO] 10.244.0.20:53595 - 58338 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000095956s
	[INFO] 10.244.0.20:38911 - 33440 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.008385959s
	[INFO] 10.244.0.20:33934 - 8882 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.00881405s
	[INFO] 10.244.0.20:54489 - 41746 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 610 0.011060365s
	[INFO] 10.244.0.20:45587 - 20569 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.011315382s
	[INFO] 10.244.0.24:56582 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000192077s
	[INFO] 10.244.0.24:45563 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000140263s
	
	
	==> describe nodes <==
	Name:               addons-467441
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-467441
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22da80be3b90f71512d84256b3df4ef76bd13ff8
	                    minikube.k8s.io/name=addons-467441
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_16T11_14_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-467441
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Dec 2024 11:14:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-467441
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Dec 2024 11:21:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Dec 2024 11:19:38 +0000   Mon, 16 Dec 2024 11:13:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Dec 2024 11:19:38 +0000   Mon, 16 Dec 2024 11:13:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Dec 2024 11:19:38 +0000   Mon, 16 Dec 2024 11:13:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Dec 2024 11:19:38 +0000   Mon, 16 Dec 2024 11:14:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-467441
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 d62e87b8c4ce46d688223c02af9759c3
	  System UUID:                201f706f-2d99-4556-8c0e-c0725ad84842
	  Boot ID:                    4589c027-c057-41f4-bde7-e198f2c36aaf
	  Kernel Version:             5.15.0-1072-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (18 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m6s
	  default                     cloud-spanner-emulator-dc5db94f4-6fvnl       0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m14s
	  default                     hello-world-app-55bf9c44b4-5984x             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m22s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-bblp5    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         7m11s
	  kube-system                 coredns-7c65d6cfc9-q957p                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     7m16s
	  kube-system                 etcd-addons-467441                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         7m23s
	  kube-system                 kindnet-xpdrb                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      7m18s
	  kube-system                 kube-apiserver-addons-467441                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 kube-controller-manager-addons-467441        200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m25s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m13s
	  kube-system                 kube-proxy-pss99                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m17s
	  kube-system                 kube-scheduler-addons-467441                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m23s
	  kube-system                 metrics-server-84c5f94fbc-vwzrq              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         7m13s
	  kube-system                 nvidia-device-plugin-daemonset-zh27s         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m13s
	  local-path-storage          local-path-provisioner-86d989889c-vwklx      0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m12s
	  yakd-dashboard              yakd-dashboard-67d98fc6b-hp8vn               0 (0%)        0 (0%)      128Mi (1%)       256Mi (3%)     7m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             638Mi (8%)   476Mi (6%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 7m11s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  7m31s (x8 over 7m31s)  kubelet          Node addons-467441 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m31s (x8 over 7m31s)  kubelet          Node addons-467441 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m31s (x7 over 7m31s)  kubelet          Node addons-467441 status is now: NodeHasSufficientPID
	  Normal   Starting                 7m23s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 7m23s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  7m23s                  kubelet          Node addons-467441 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m23s                  kubelet          Node addons-467441 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m23s                  kubelet          Node addons-467441 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           7m19s                  node-controller  Node addons-467441 event: Registered Node addons-467441 in Controller
	  Normal   NodeReady                6m32s                  kubelet          Node addons-467441 status is now: NodeReady
	
	
	==> dmesg <==
	[Dec16 08:54] overlayfs: '/var/lib/containers/storage/overlay/l/7FOWQIVXOWACA56BLQVF4JJOLY' not a directory
	
	
	==> etcd [be2e989c41089971268e13abf590f1b0bda504564800a352bdf18d9378c9e3f2] <==
	{"level":"warn","ts":"2024-12-16T11:14:09.541714Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T11:14:09.034557Z","time spent":"507.114669ms","remote":"127.0.0.1:35536","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3623,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kindnet-xpdrb\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kindnet-xpdrb\" value_size:3575 >> failure:<>"}
	{"level":"info","ts":"2024-12-16T11:14:09.540681Z","caller":"traceutil/trace.go:171","msg":"trace[1863671281] transaction","detail":"{read_only:false; response_revision:346; number_of_response:1; }","duration":"394.972535ms","start":"2024-12-16T11:14:09.145688Z","end":"2024-12-16T11:14:09.540660Z","steps":["trace[1863671281] 'process raft request'  (duration: 310.931457ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T11:14:09.544923Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T11:14:09.145666Z","time spent":"399.189923ms","remote":"127.0.0.1:35536","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3360,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-pss99\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-pss99\" value_size:3309 >> failure:<>"}
	{"level":"info","ts":"2024-12-16T11:14:09.540832Z","caller":"traceutil/trace.go:171","msg":"trace[1642192314] transaction","detail":"{read_only:false; response_revision:347; number_of_response:1; }","duration":"394.764204ms","start":"2024-12-16T11:14:09.146061Z","end":"2024-12-16T11:14:09.540825Z","steps":["trace[1642192314] 'process raft request'  (duration: 310.617676ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T11:14:09.604243Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T11:14:09.146051Z","time spent":"458.012458ms","remote":"127.0.0.1:35550","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":168,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/default/default\" mod_revision:328 > success:<request_put:<key:\"/registry/serviceaccounts/default/default\" value_size:120 >> failure:<request_range:<key:\"/registry/serviceaccounts/default/default\" > >"}
	{"level":"info","ts":"2024-12-16T11:14:09.540851Z","caller":"traceutil/trace.go:171","msg":"trace[1427641176] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"245.175739ms","start":"2024-12-16T11:14:09.295670Z","end":"2024-12-16T11:14:09.540846Z","steps":["trace[1427641176] 'process raft request'  (duration: 161.045572ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T11:14:09.557962Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"262.076247ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-12-16T11:14:09.614842Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T11:14:09.295648Z","time spent":"319.145381ms","remote":"127.0.0.1:35446","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":680,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns.1811a3ff698090be\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns.1811a3ff698090be\" value_size:609 lease:8128033945515139534 >> failure:<>"}
	{"level":"info","ts":"2024-12-16T11:14:09.618278Z","caller":"traceutil/trace.go:171","msg":"trace[1388851801] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:348; }","duration":"322.400996ms","start":"2024-12-16T11:14:09.295862Z","end":"2024-12-16T11:14:09.618263Z","steps":["trace[1388851801] 'agreement among raft nodes before linearized reading'  (duration: 163.632568ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T11:14:09.700948Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T11:14:09.295841Z","time spent":"405.076134ms","remote":"127.0.0.1:35334","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-12-16T11:14:11.514867Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.247397ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T11:14:11.515186Z","caller":"traceutil/trace.go:171","msg":"trace[1048516907] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:378; }","duration":"163.577111ms","start":"2024-12-16T11:14:11.351594Z","end":"2024-12-16T11:14:11.515171Z","steps":["trace[1048516907] 'agreement among raft nodes before linearized reading'  (duration: 81.470156ms)","trace[1048516907] 'range keys from in-memory index tree'  (duration: 81.669133ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-16T11:14:11.529175Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.245424ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/local-path-storage\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T11:14:11.529310Z","caller":"traceutil/trace.go:171","msg":"trace[574520632] range","detail":"{range_begin:/registry/namespaces/local-path-storage; range_end:; response_count:0; response_revision:378; }","duration":"203.91795ms","start":"2024-12-16T11:14:11.325374Z","end":"2024-12-16T11:14:11.529292Z","steps":["trace[574520632] 'agreement among raft nodes before linearized reading'  (duration: 107.796445ms)","trace[574520632] 'range keys from in-memory index tree'  (duration: 82.439952ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-16T11:14:11.529723Z","caller":"traceutil/trace.go:171","msg":"trace[468470555] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"112.694267ms","start":"2024-12-16T11:14:11.417020Z","end":"2024-12-16T11:14:11.529714Z","steps":["trace[468470555] 'process raft request'  (duration: 91.400032ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T11:14:12.638245Z","caller":"traceutil/trace.go:171","msg":"trace[1024051239] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"116.165302ms","start":"2024-12-16T11:14:12.522066Z","end":"2024-12-16T11:14:12.638231Z","steps":["trace[1024051239] 'process raft request'  (duration: 36.961552ms)","trace[1024051239] 'compare'  (duration: 78.766617ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-16T11:14:12.638424Z","caller":"traceutil/trace.go:171","msg":"trace[1493401482] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"116.177282ms","start":"2024-12-16T11:14:12.522240Z","end":"2024-12-16T11:14:12.638418Z","steps":["trace[1493401482] 'process raft request'  (duration: 115.638908ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T11:14:12.638509Z","caller":"traceutil/trace.go:171","msg":"trace[576796796] transaction","detail":"{read_only:false; response_revision:429; number_of_response:1; }","duration":"116.224887ms","start":"2024-12-16T11:14:12.522279Z","end":"2024-12-16T11:14:12.638504Z","steps":["trace[576796796] 'process raft request'  (duration: 115.631647ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T11:14:12.638590Z","caller":"traceutil/trace.go:171","msg":"trace[1982304804] linearizableReadLoop","detail":"{readStateIndex:438; appliedIndex:437; }","duration":"116.392866ms","start":"2024-12-16T11:14:12.522191Z","end":"2024-12-16T11:14:12.638584Z","steps":["trace[1982304804] 'read index received'  (duration: 24.89266ms)","trace[1982304804] 'applied index is now lower than readState.Index'  (duration: 91.499525ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-16T11:14:12.638826Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.619093ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T11:14:12.638859Z","caller":"traceutil/trace.go:171","msg":"trace[1038037122] range","detail":"{range_begin:/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account; range_end:; response_count:0; response_revision:430; }","duration":"116.664425ms","start":"2024-12-16T11:14:12.522187Z","end":"2024-12-16T11:14:12.638852Z","steps":["trace[1038037122] 'agreement among raft nodes before linearized reading'  (duration: 116.603241ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T11:14:12.704649Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.540233ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T11:14:12.704807Z","caller":"traceutil/trace.go:171","msg":"trace[1706744554] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:434; }","duration":"138.710887ms","start":"2024-12-16T11:14:12.566084Z","end":"2024-12-16T11:14:12.704795Z","steps":["trace[1706744554] 'agreement among raft nodes before linearized reading'  (duration: 138.416035ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T11:14:12.705747Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.011265ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-7c65d6cfc9\" ","response":"range_response_count:1 size:3797"}
	{"level":"info","ts":"2024-12-16T11:14:12.705793Z","caller":"traceutil/trace.go:171","msg":"trace[1139679079] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-7c65d6cfc9; range_end:; response_count:1; response_revision:434; }","duration":"148.062653ms","start":"2024-12-16T11:14:12.557721Z","end":"2024-12-16T11:14:12.705784Z","steps":["trace[1139679079] 'agreement among raft nodes before linearized reading'  (duration: 147.982647ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:21:26 up  8:03,  0 users,  load average: 0.05, 0.90, 1.84
	Linux addons-467441 5.15.0-1072-aws #78~20.04.1-Ubuntu SMP Wed Oct 9 15:29:54 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [7e63609c526376be42c32a8b4675e7e85b8b7926c5674de81fc7064aa893b180] <==
	I1216 11:19:24.730110       1 main.go:301] handling current node
	I1216 11:19:34.733344       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:19:34.733378       1 main.go:301] handling current node
	I1216 11:19:44.729991       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:19:44.730024       1 main.go:301] handling current node
	I1216 11:19:54.731722       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:19:54.731755       1 main.go:301] handling current node
	I1216 11:20:04.732205       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:20:04.732240       1 main.go:301] handling current node
	I1216 11:20:14.730571       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:20:14.730603       1 main.go:301] handling current node
	I1216 11:20:24.730794       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:20:24.730934       1 main.go:301] handling current node
	I1216 11:20:34.734345       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:20:34.734381       1 main.go:301] handling current node
	I1216 11:20:44.731767       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:20:44.731802       1 main.go:301] handling current node
	I1216 11:20:54.730020       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:20:54.730157       1 main.go:301] handling current node
	I1216 11:21:04.732027       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:21:04.732135       1 main.go:301] handling current node
	I1216 11:21:14.730173       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:21:14.730205       1 main.go:301] handling current node
	I1216 11:21:24.735112       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:21:24.736879       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ce98b18aa1ed3299204710fa4aecc87aad2125b7765a5fb4112a594d7957de48] <==
	 > logger="UnhandledError"
	E1216 11:16:45.614066       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.103.49.164:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.103.49.164:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.103.49.164:443: connect: connection refused" logger="UnhandledError"
	I1216 11:16:45.684426       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1216 11:17:31.034061       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:50534: use of closed network connection
	E1216 11:17:31.432998       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:50570: use of closed network connection
	I1216 11:17:40.769727       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.30.107"}
	I1216 11:18:25.317489       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1216 11:18:45.022176       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 11:18:45.022992       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 11:18:45.076980       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 11:18:45.077200       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 11:18:45.106361       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 11:18:45.106821       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 11:18:45.170734       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 11:18:45.171333       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 11:18:45.243047       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 11:18:45.243096       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1216 11:18:46.171574       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1216 11:18:46.244337       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1216 11:18:46.290025       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1216 11:18:58.770239       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1216 11:18:59.910130       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1216 11:19:04.342254       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1216 11:19:04.667148       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.106.24"}
	I1216 11:21:24.960568       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.65.168"}
	
	
	==> kube-controller-manager [2cf6fde1ee7c9a9dd2473916be3c1f67ffb0c9877a501651e16c405e42fdc42b] <==
	W1216 11:20:01.428041       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 11:20:01.428093       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 11:20:02.355926       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 11:20:02.355970       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 11:20:08.912557       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 11:20:08.912602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 11:20:24.373360       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 11:20:24.373406       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 11:20:36.715132       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 11:20:36.715185       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 11:20:44.626263       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 11:20:44.626306       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 11:21:01.493623       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 11:21:01.493667       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 11:21:08.310605       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 11:21:08.310651       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 11:21:13.410226       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 11:21:13.410267       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 11:21:15.355151       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 11:21:15.355195       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1216 11:21:24.673048       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="40.767487ms"
	I1216 11:21:24.703433       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="29.357191ms"
	I1216 11:21:24.704949       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="73.869µs"
	I1216 11:21:26.529523       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="9.302732ms"
	I1216 11:21:26.530520       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="52.824µs"
	
	
	==> kube-proxy [16934743848f4975744a77abf304b3a77b61c059e8028249a4797ca7d0fe3184] <==
	I1216 11:14:13.640099       1 server_linux.go:66] "Using iptables proxy"
	I1216 11:14:14.782967       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1216 11:14:14.783123       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 11:14:15.042773       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 11:14:15.042850       1 server_linux.go:169] "Using iptables Proxier"
	I1216 11:14:15.051817       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 11:14:15.052303       1 server.go:483] "Version info" version="v1.31.2"
	I1216 11:14:15.057948       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 11:14:15.072587       1 config.go:199] "Starting service config controller"
	I1216 11:14:15.072621       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1216 11:14:15.072652       1 config.go:105] "Starting endpoint slice config controller"
	I1216 11:14:15.072657       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1216 11:14:15.073080       1 config.go:328] "Starting node config controller"
	I1216 11:14:15.073101       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1216 11:14:15.173241       1 shared_informer.go:320] Caches are synced for node config
	I1216 11:14:15.173368       1 shared_informer.go:320] Caches are synced for service config
	I1216 11:14:15.173395       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1087144bdb483abb8ae166e63a9d0bb8e7f790dfe4e00eab621499957bee00c6] <==
	W1216 11:14:01.634454       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1216 11:14:01.635670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 11:14:01.634561       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1216 11:14:01.635696       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 11:14:01.634617       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1216 11:14:01.635725       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1216 11:14:01.634659       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1216 11:14:01.635755       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 11:14:01.634761       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1216 11:14:01.635788       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1216 11:14:01.634806       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1216 11:14:01.635817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 11:14:01.634857       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1216 11:14:01.635838       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 11:14:01.634899       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1216 11:14:01.635868       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 11:14:01.634999       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1216 11:14:01.635897       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 11:14:01.635135       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1216 11:14:01.635918       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1216 11:14:01.635386       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1216 11:14:01.635948       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 11:14:01.635408       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1216 11:14:01.635967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1216 11:14:02.921142       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 16 11:19:53 addons-467441 kubelet[1518]: E1216 11:19:53.401319    1518 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734347993401080434,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:19:53 addons-467441 kubelet[1518]: E1216 11:19:53.401360    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734347993401080434,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:20:03 addons-467441 kubelet[1518]: E1216 11:20:03.404453    1518 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348003404174683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:20:03 addons-467441 kubelet[1518]: E1216 11:20:03.404492    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348003404174683,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:20:13 addons-467441 kubelet[1518]: E1216 11:20:13.406818    1518 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348013406600522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:20:13 addons-467441 kubelet[1518]: E1216 11:20:13.406858    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348013406600522,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:20:23 addons-467441 kubelet[1518]: E1216 11:20:23.409840    1518 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348023409544085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:20:23 addons-467441 kubelet[1518]: E1216 11:20:23.409899    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348023409544085,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:20:32 addons-467441 kubelet[1518]: I1216 11:20:32.219679    1518 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="kube-system/nvidia-device-plugin-daemonset-zh27s" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 11:20:33 addons-467441 kubelet[1518]: E1216 11:20:33.412536    1518 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348033412267483,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:20:33 addons-467441 kubelet[1518]: E1216 11:20:33.412570    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348033412267483,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:20:43 addons-467441 kubelet[1518]: E1216 11:20:43.414827    1518 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348043414550527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:20:43 addons-467441 kubelet[1518]: E1216 11:20:43.414868    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348043414550527,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:20:53 addons-467441 kubelet[1518]: E1216 11:20:53.417360    1518 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348053417092674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:20:53 addons-467441 kubelet[1518]: E1216 11:20:53.417399    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348053417092674,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:20:55 addons-467441 kubelet[1518]: I1216 11:20:55.219269    1518 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 11:20:57 addons-467441 kubelet[1518]: I1216 11:20:57.219104    1518 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/cloud-spanner-emulator-dc5db94f4-6fvnl" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 11:21:03 addons-467441 kubelet[1518]: E1216 11:21:03.419809    1518 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348063419533649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:21:03 addons-467441 kubelet[1518]: E1216 11:21:03.419846    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348063419533649,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:21:13 addons-467441 kubelet[1518]: E1216 11:21:13.422125    1518 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348073421895343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:21:13 addons-467441 kubelet[1518]: E1216 11:21:13.422160    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348073421895343,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:21:23 addons-467441 kubelet[1518]: E1216 11:21:23.424705    1518 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348083424452322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:21:23 addons-467441 kubelet[1518]: E1216 11:21:23.424744    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348083424452322,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:577493,},InodesUsed:&UInt64Value{Value:219,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:21:24 addons-467441 kubelet[1518]: I1216 11:21:24.668389    1518 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=138.421814256 podStartE2EDuration="2m20.668371714s" podCreationTimestamp="2024-12-16 11:19:04 +0000 UTC" firstStartedPulling="2024-12-16 11:19:04.955354048 +0000 UTC m=+301.879093513" lastFinishedPulling="2024-12-16 11:19:07.201911515 +0000 UTC m=+304.125650971" observedRunningTime="2024-12-16 11:19:08.243981897 +0000 UTC m=+305.167721362" watchObservedRunningTime="2024-12-16 11:21:24.668371714 +0000 UTC m=+441.592111171"
	Dec 16 11:21:24 addons-467441 kubelet[1518]: I1216 11:21:24.771017    1518 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k5cf9\" (UniqueName: \"kubernetes.io/projected/f3f29727-9a3b-4636-bf7a-f0e2d0c3ec36-kube-api-access-k5cf9\") pod \"hello-world-app-55bf9c44b4-5984x\" (UID: \"f3f29727-9a3b-4636-bf7a-f0e2d0c3ec36\") " pod="default/hello-world-app-55bf9c44b4-5984x"
	
	
	==> storage-provisioner [c7daab6e1e325650b602568d5a975591b6abd08f1dd2994cac6ed61bbdfd0ad6] <==
	I1216 11:14:55.901330       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 11:14:55.913941       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 11:14:55.914067       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1216 11:14:55.926532       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 11:14:55.926711       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-467441_7f220d40-98bf-4075-86ad-2616bdfe9153!
	I1216 11:14:55.927796       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"13c55a31-277f-4961-8ce7-f8d1fb0f723f", APIVersion:"v1", ResourceVersion:"926", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-467441_7f220d40-98bf-4075-86ad-2616bdfe9153 became leader
	I1216 11:14:56.027217       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-467441_7f220d40-98bf-4075-86ad-2616bdfe9153!
	

-- /stdout --
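
The storage-provisioner log above shows the usual controller startup pattern: acquire a leader lock named kube-system/k8s.io-minikube-hostpath (via an Endpoints object, per the LeaderElection event), then start the provisioner controller. Below is a minimal client-go sketch of the same election, using the current Lease-based lock rather than the Endpoints flavor seen in the log; the identity string is a hypothetical stand-in for the pod-derived one, and this is not the provisioner's actual code.

package main

import (
	"context"
	"log"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func main() {
	// Build a client from the local kubeconfig (assumption: running outside
	// the cluster; the real provisioner uses in-cluster config).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// Lease name and namespace are taken from the log above.
	lock := &resourcelock.LeaseLock{
		LeaseMeta: metav1.ObjectMeta{
			Name:      "k8s.io-minikube-hostpath",
			Namespace: "kube-system",
		},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "example-identity"},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 15 * time.Second,
		RenewDeadline: 10 * time.Second,
		RetryPeriod:   2 * time.Second,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				// Mirrors "successfully acquired lease ... Starting provisioner
				// controller" in the log above.
				log.Println("acquired lease; starting provisioner controller")
				<-ctx.Done()
			},
			OnStoppedLeading: func() { log.Println("lost lease; shutting down") },
		},
	})
}

Only the lease holder runs the controller loop, which is why the log shows the acquire/became-leader/started sequence before any provisioning work.
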
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-467441 -n addons-467441
helpers_test.go:261: (dbg) Run:  kubectl --context addons-467441 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-56cpm ingress-nginx-admission-patch-ctlcz
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-467441 describe pod ingress-nginx-admission-create-56cpm ingress-nginx-admission-patch-ctlcz
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-467441 describe pod ingress-nginx-admission-create-56cpm ingress-nginx-admission-patch-ctlcz: exit status 1 (91.808068ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-56cpm" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-ctlcz" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-467441 describe pod ingress-nginx-admission-create-56cpm ingress-nginx-admission-patch-ctlcz: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-467441 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-467441 addons disable ingress-dns --alsologtostderr -v=1: (1.496716421s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-467441 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-467441 addons disable ingress --alsologtostderr -v=1: (7.781617013s)
--- FAIL: TestAddons/parallel/Ingress (153.32s)
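
The step that failed above is the in-node curl with a forced Host header, which exited with status 28 (curl's "operation timed out") after roughly 2m10s. For reference, here is a minimal Go sketch of the same probe; it assumes it runs where the ingress controller is reachable on 127.0.0.1:80 (as it is inside the minikube node) and is not part of the test suite.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Use a short timeout instead of curl's default.
	client := &http.Client{Timeout: 10 * time.Second}

	req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
	if err != nil {
		panic(err)
	}
	// Force the Host header so the ingress controller routes the request to
	// the nginx service, exactly what curl -H 'Host: nginx.example.com' does.
	req.Host = "nginx.example.com"

	resp, err := client.Do(req)
	if err != nil {
		// A timeout here corresponds to the exit status 28 seen above.
		fmt.Println("ingress not reachable:", err)
		return
	}
	defer resp.Body.Close()

	body, _ := io.ReadAll(resp.Body)
	n := len(body)
	if n > 200 {
		n = 200
	}
	fmt.Println(resp.Status)
	fmt.Println(string(body[:n]))
}
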

TestAddons/parallel/MetricsServer (349.46s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer


=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 5.145194ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-vwzrq" [702d35be-9a96-4ad2-b0dd-6e3c9ff3d4aa] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.008759695s
addons_test.go:402: (dbg) Run:  kubectl --context addons-467441 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-467441 top pods -n kube-system: exit status 1 (104.623721ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-q957p, age: 3m52.13761792s

** /stderr **
I1216 11:18:02.141271 1137938 retry.go:31] will retry after 4.368153528s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-467441 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-467441 top pods -n kube-system: exit status 1 (81.975032ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-q957p, age: 3m56.588220693s

** /stderr **
I1216 11:18:06.592075 1137938 retry.go:31] will retry after 5.54232582s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-467441 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-467441 top pods -n kube-system: exit status 1 (84.059638ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-q957p, age: 4m2.215857192s

** /stderr **
I1216 11:18:12.219064 1137938 retry.go:31] will retry after 6.146009268s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-467441 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-467441 top pods -n kube-system: exit status 1 (116.105514ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-q957p, age: 4m8.479768883s

** /stderr **
I1216 11:18:18.482341 1137938 retry.go:31] will retry after 9.511783702s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-467441 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-467441 top pods -n kube-system: exit status 1 (94.813975ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-q957p, age: 4m18.086005529s

** /stderr **
I1216 11:18:28.089244 1137938 retry.go:31] will retry after 20.123730784s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-467441 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-467441 top pods -n kube-system: exit status 1 (83.16325ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-q957p, age: 4m38.295064938s

** /stderr **
I1216 11:18:48.298079 1137938 retry.go:31] will retry after 26.019300882s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-467441 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-467441 top pods -n kube-system: exit status 1 (87.260303ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-q957p, age: 5m4.402222259s

** /stderr **
I1216 11:19:14.405507 1137938 retry.go:31] will retry after 24.89844614s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-467441 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-467441 top pods -n kube-system: exit status 1 (83.741696ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-q957p, age: 5m29.385418173s

** /stderr **
I1216 11:19:39.388618 1137938 retry.go:31] will retry after 35.232821259s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-467441 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-467441 top pods -n kube-system: exit status 1 (84.564104ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-q957p, age: 6m4.706592659s

** /stderr **
I1216 11:20:14.709615 1137938 retry.go:31] will retry after 1m25.378676599s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-467441 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-467441 top pods -n kube-system: exit status 1 (82.103338ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-q957p, age: 7m30.168212209s

** /stderr **
I1216 11:21:40.171519 1137938 retry.go:31] will retry after 1m24.516393831s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-467441 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-467441 top pods -n kube-system: exit status 1 (90.471417ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-q957p, age: 8m54.777898212s

** /stderr **
I1216 11:23:04.780978 1137938 retry.go:31] will retry after 38.543635562s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-467441 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-467441 top pods -n kube-system: exit status 1 (86.45139ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-q957p, age: 9m33.408517174s

** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
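
The sequence above is a capped-backoff retry loop around `kubectl top pods` (each "will retry after ..." line comes from minikube's retry helper at retry.go:31), which gave up once the 6-minute budget was spent. A minimal sketch of that pattern, mirroring the shape of the logged intervals rather than minikube's actual implementation:

package main

import (
	"fmt"
	"math/rand"
	"os/exec"
	"time"
)

func main() {
	deadline := time.Now().Add(6 * time.Minute) // the test waits 6m0s for metrics
	backoff := 4 * time.Second

	for attempt := 1; ; attempt++ {
		out, err := exec.Command("kubectl", "--context", "addons-467441",
			"top", "pods", "-n", "kube-system").CombinedOutput()
		if err == nil {
			fmt.Printf("metrics available after %d attempt(s):\n%s", attempt, out)
			return
		}
		if time.Now().After(deadline) {
			fmt.Printf("failed checking metric server: %v\n%s", err, out)
			return
		}
		// Grow the interval with jitter, capped around 90s, matching the
		// shape (not the exact values) of the intervals logged above.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)/2))
		fmt.Printf("will retry after %s: %v\n", sleep, err)
		time.Sleep(sleep)
		if backoff < 90*time.Second {
			backoff *= 2
		}
	}
}
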
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-467441
helpers_test.go:235: (dbg) docker inspect addons-467441:

-- stdout --
	[
	    {
	        "Id": "29320d75ba4293640167da1d153c817499290388bf213853a7a9d278067e14b1",
	        "Created": "2024-12-16T11:13:39.779228172Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1139202,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-16T11:13:39.92485375Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:02e8be8b1127faa30f09fff745d2a6d385248178d204468bf667a69a71dbf447",
	        "ResolvConfPath": "/var/lib/docker/containers/29320d75ba4293640167da1d153c817499290388bf213853a7a9d278067e14b1/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/29320d75ba4293640167da1d153c817499290388bf213853a7a9d278067e14b1/hostname",
	        "HostsPath": "/var/lib/docker/containers/29320d75ba4293640167da1d153c817499290388bf213853a7a9d278067e14b1/hosts",
	        "LogPath": "/var/lib/docker/containers/29320d75ba4293640167da1d153c817499290388bf213853a7a9d278067e14b1/29320d75ba4293640167da1d153c817499290388bf213853a7a9d278067e14b1-json.log",
	        "Name": "/addons-467441",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-467441:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-467441",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/13448c3639bb4a4fd8f8106417771c4d37fe2a9d6d070db7d5a42613b914bee4-init/diff:/var/lib/docker/overlay2/d13e29c6821a56996707870a44a8892ca6c52b8aaf1d7542bba33ae7dbaaadff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/13448c3639bb4a4fd8f8106417771c4d37fe2a9d6d070db7d5a42613b914bee4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/13448c3639bb4a4fd8f8106417771c4d37fe2a9d6d070db7d5a42613b914bee4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/13448c3639bb4a4fd8f8106417771c4d37fe2a9d6d070db7d5a42613b914bee4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-467441",
	                "Source": "/var/lib/docker/volumes/addons-467441/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-467441",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-467441",
	                "name.minikube.sigs.k8s.io": "addons-467441",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "bea63fe2b36c7b0f624b7fa9af015cee1b3760acef7ae0b98c97292912ff22aa",
	            "SandboxKey": "/var/run/docker/netns/bea63fe2b36c",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34241"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34242"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34245"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34243"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34244"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-467441": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "d6fcce2171d5d2c661d67be0fa4b0eab5ab56b6725de74e30593899084a47d1a",
	                    "EndpointID": "a04e33c4656fe0bd1d5b56a524679b1e33117dcb8806e2d199ed23958ab0a5e4",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-467441",
	                        "29320d75ba42"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
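
The NetworkSettings.Ports block in the inspect output above shows each container port published on an ephemeral 127.0.0.1 port (22/tcp on 34241, 8443/tcp on 34244, and so on). A minimal sketch of reading one of those mappings back out with a `docker inspect` Go template; the container name is taken from this report, and shelling out via os/exec stands in for whatever client the tooling actually uses:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Index into .NetworkSettings.Ports["22/tcp"][0].HostPort with a Go
	// template, the same structure shown in the inspect JSON above.
	out, err := exec.Command("docker", "inspect",
		"-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`,
		"addons-467441").Output()
	if err != nil {
		fmt.Println("inspect failed:", err)
		return
	}
	fmt.Println("ssh reachable at 127.0.0.1:" + strings.TrimSpace(string(out)))
	// With the values above this prints 127.0.0.1:34241.
}
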
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-467441 -n addons-467441
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-467441 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-467441 logs -n 25: (1.440221367s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-168069 | jenkins | v1.34.0 | 16 Dec 24 11:13 UTC |                     |
	|         | download-docker-168069                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-168069                                                                   | download-docker-168069 | jenkins | v1.34.0 | 16 Dec 24 11:13 UTC | 16 Dec 24 11:13 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-469402   | jenkins | v1.34.0 | 16 Dec 24 11:13 UTC |                     |
	|         | binary-mirror-469402                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:43945                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-469402                                                                     | binary-mirror-469402   | jenkins | v1.34.0 | 16 Dec 24 11:13 UTC | 16 Dec 24 11:13 UTC |
	| addons  | enable dashboard -p                                                                         | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:13 UTC |                     |
	|         | addons-467441                                                                               |                        |         |         |                     |                     |
	| addons  | disable dashboard -p                                                                        | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:13 UTC |                     |
	|         | addons-467441                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-467441 --wait=true                                                                | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:13 UTC | 16 Dec 24 11:17 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-467441 addons disable                                                                | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:17 UTC | 16 Dec 24 11:17 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-467441 addons disable                                                                | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:17 UTC | 16 Dec 24 11:17 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:17 UTC | 16 Dec 24 11:17 UTC |
	|         | -p addons-467441                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-467441 addons disable                                                                | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:17 UTC | 16 Dec 24 11:17 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-467441 ip                                                                            | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:17 UTC | 16 Dec 24 11:17 UTC |
	| addons  | addons-467441 addons disable                                                                | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:17 UTC | 16 Dec 24 11:17 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-467441 addons                                                                        | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:18 UTC | 16 Dec 24 11:18 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-467441 addons                                                                        | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:18 UTC | 16 Dec 24 11:18 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-467441 addons                                                                        | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:18 UTC | 16 Dec 24 11:19 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-467441 ssh curl -s                                                                   | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:19 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-467441 ip                                                                            | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:21 UTC | 16 Dec 24 11:21 UTC |
	| addons  | addons-467441 addons disable                                                                | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:21 UTC | 16 Dec 24 11:21 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-467441 addons disable                                                                | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:21 UTC | 16 Dec 24 11:21 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| ssh     | addons-467441 ssh cat                                                                       | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:21 UTC | 16 Dec 24 11:21 UTC |
	|         | /opt/local-path-provisioner/pvc-983a0cf1-7667-42be-95ff-08973df1d4de_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-467441 addons disable                                                                | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:21 UTC | 16 Dec 24 11:22 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-467441 addons disable                                                                | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:22 UTC | 16 Dec 24 11:22 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-467441 addons                                                                        | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:22 UTC | 16 Dec 24 11:22 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-467441 addons                                                                        | addons-467441          | jenkins | v1.34.0 | 16 Dec 24 11:22 UTC | 16 Dec 24 11:22 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 11:13:14
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 11:13:14.155364 1138702 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:13:14.155570 1138702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:13:14.155598 1138702 out.go:358] Setting ErrFile to fd 2...
	I1216 11:13:14.155617 1138702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:13:14.156122 1138702 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-1132549/.minikube/bin
	I1216 11:13:14.156636 1138702 out.go:352] Setting JSON to false
	I1216 11:13:14.157594 1138702 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":28540,"bootTime":1734319055,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1216 11:13:14.157695 1138702 start.go:139] virtualization:  
	I1216 11:13:14.161349 1138702 out.go:177] * [addons-467441] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1216 11:13:14.164247 1138702 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 11:13:14.164369 1138702 notify.go:220] Checking for updates...
	I1216 11:13:14.169832 1138702 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 11:13:14.172663 1138702 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20107-1132549/kubeconfig
	I1216 11:13:14.175554 1138702 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-1132549/.minikube
	I1216 11:13:14.178400 1138702 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 11:13:14.181223 1138702 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 11:13:14.184324 1138702 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 11:13:14.210384 1138702 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1216 11:13:14.210510 1138702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 11:13:14.272602 1138702 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-12-16 11:13:14.263710896 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1216 11:13:14.272736 1138702 docker.go:318] overlay module found
	I1216 11:13:14.275999 1138702 out.go:177] * Using the docker driver based on user configuration
	I1216 11:13:14.278827 1138702 start.go:297] selected driver: docker
	I1216 11:13:14.278845 1138702 start.go:901] validating driver "docker" against <nil>
	I1216 11:13:14.278857 1138702 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 11:13:14.279624 1138702 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 11:13:14.329342 1138702 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-12-16 11:13:14.320797195 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1216 11:13:14.329565 1138702 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 11:13:14.329789 1138702 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 11:13:14.332735 1138702 out.go:177] * Using Docker driver with root privileges
	I1216 11:13:14.335754 1138702 cni.go:84] Creating CNI manager for ""
	I1216 11:13:14.335827 1138702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 11:13:14.335848 1138702 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 11:13:14.335935 1138702 start.go:340] cluster config:
	{Name:addons-467441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-467441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:13:14.339080 1138702 out.go:177] * Starting "addons-467441" primary control-plane node in "addons-467441" cluster
	I1216 11:13:14.341906 1138702 cache.go:121] Beginning downloading kic base image for docker with crio
	I1216 11:13:14.344871 1138702 out.go:177] * Pulling base image v0.0.45-1733912881-20083 ...
	I1216 11:13:14.347726 1138702 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1216 11:13:14.347811 1138702 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20107-1132549/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4
	I1216 11:13:14.347822 1138702 cache.go:56] Caching tarball of preloaded images
	I1216 11:13:14.347832 1138702 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local docker daemon
	I1216 11:13:14.347905 1138702 preload.go:172] Found /home/jenkins/minikube-integration/20107-1132549/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1216 11:13:14.347915 1138702 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1216 11:13:14.348279 1138702 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/config.json ...
	I1216 11:13:14.348310 1138702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/config.json: {Name:mk0880d5bf7802bbb02fd0af2735bb69c597982f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:13:14.363961 1138702 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 to local cache
	I1216 11:13:14.364071 1138702 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local cache directory
	I1216 11:13:14.364089 1138702 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local cache directory, skipping pull
	I1216 11:13:14.364094 1138702 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 exists in cache, skipping pull
	I1216 11:13:14.364101 1138702 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 as a tarball
	I1216 11:13:14.364107 1138702 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 from local cache
	I1216 11:13:31.736407 1138702 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 from cached tarball
	I1216 11:13:31.736459 1138702 cache.go:194] Successfully downloaded all kic artifacts
	I1216 11:13:31.736510 1138702 start.go:360] acquireMachinesLock for addons-467441: {Name:mkb047cb330c474c9d07841e4319f52660cec1dd Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1216 11:13:31.736662 1138702 start.go:364] duration metric: took 128.546µs to acquireMachinesLock for "addons-467441"
	I1216 11:13:31.736691 1138702 start.go:93] Provisioning new machine with config: &{Name:addons-467441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-467441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 11:13:31.736794 1138702 start.go:125] createHost starting for "" (driver="docker")
	I1216 11:13:31.740297 1138702 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1216 11:13:31.740565 1138702 start.go:159] libmachine.API.Create for "addons-467441" (driver="docker")
	I1216 11:13:31.740602 1138702 client.go:168] LocalClient.Create starting
	I1216 11:13:31.740721 1138702 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20107-1132549/.minikube/certs/ca.pem
	I1216 11:13:32.903436 1138702 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20107-1132549/.minikube/certs/cert.pem
	I1216 11:13:33.944605 1138702 cli_runner.go:164] Run: docker network inspect addons-467441 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1216 11:13:33.960119 1138702 cli_runner.go:211] docker network inspect addons-467441 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1216 11:13:33.960206 1138702 network_create.go:284] running [docker network inspect addons-467441] to gather additional debugging logs...
	I1216 11:13:33.960227 1138702 cli_runner.go:164] Run: docker network inspect addons-467441
	W1216 11:13:33.975828 1138702 cli_runner.go:211] docker network inspect addons-467441 returned with exit code 1
	I1216 11:13:33.975865 1138702 network_create.go:287] error running [docker network inspect addons-467441]: docker network inspect addons-467441: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-467441 not found
	I1216 11:13:33.975880 1138702 network_create.go:289] output of [docker network inspect addons-467441]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-467441 not found
	
	** /stderr **
	I1216 11:13:33.975980 1138702 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 11:13:33.992146 1138702 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40018b3080}
	I1216 11:13:33.992191 1138702 network_create.go:124] attempt to create docker network addons-467441 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1216 11:13:33.992248 1138702 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-467441 addons-467441
	I1216 11:13:34.067409 1138702 network_create.go:108] docker network addons-467441 192.168.49.0/24 created
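The network_create step above shells out to the Docker CLI. A minimal Go sketch of that invocation using os/exec; the driver, subnet, gateway, and labels are copied from the logged command, and the profile name addons-467441 is specific to this run:

    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        // Mirrors the logged `docker network create` call: a /24 bridge
        // network with minikube's bookkeeping labels attached.
        args := []string{
            "network", "create",
            "--driver=bridge",
            "--subnet=192.168.49.0/24",
            "--gateway=192.168.49.1",
            "-o", "--ip-masq",
            "-o", "--icc",
            "-o", "com.docker.network.driver.mtu=1500",
            "--label=created_by.minikube.sigs.k8s.io=true",
            "--label=name.minikube.sigs.k8s.io=addons-467441",
            "addons-467441",
        }
        if out, err := exec.Command("docker", args...).CombinedOutput(); err != nil {
            fmt.Printf("network create failed: %v\n%s", err, out)
        } else {
            fmt.Printf("created network: %s", out)
        }
    }

With the network in place, the node container can be given the static IP 192.168.49.2 calculated in the next log line.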
	I1216 11:13:34.067444 1138702 kic.go:121] calculated static IP "192.168.49.2" for the "addons-467441" container
	I1216 11:13:34.067536 1138702 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1216 11:13:34.084637 1138702 cli_runner.go:164] Run: docker volume create addons-467441 --label name.minikube.sigs.k8s.io=addons-467441 --label created_by.minikube.sigs.k8s.io=true
	I1216 11:13:34.102956 1138702 oci.go:103] Successfully created a docker volume addons-467441
	I1216 11:13:34.103082 1138702 cli_runner.go:164] Run: docker run --rm --name addons-467441-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-467441 --entrypoint /usr/bin/test -v addons-467441:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 -d /var/lib
	I1216 11:13:35.650028 1138702 cli_runner.go:217] Completed: docker run --rm --name addons-467441-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-467441 --entrypoint /usr/bin/test -v addons-467441:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 -d /var/lib: (1.546897633s)
	I1216 11:13:35.650057 1138702 oci.go:107] Successfully prepared a docker volume addons-467441
	I1216 11:13:35.650086 1138702 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1216 11:13:35.650106 1138702 kic.go:194] Starting extracting preloaded images to volume ...
	I1216 11:13:35.650175 1138702 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20107-1132549/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-467441:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 -I lz4 -xf /preloaded.tar -C /extractDir
	I1216 11:13:39.713697 1138702 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20107-1132549/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-467441:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 -I lz4 -xf /preloaded.tar -C /extractDir: (4.063483647s)
	I1216 11:13:39.713732 1138702 kic.go:203] duration metric: took 4.06362355s to extract preloaded images to volume ...
	W1216 11:13:39.713879 1138702 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1216 11:13:39.713998 1138702 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1216 11:13:39.764996 1138702 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-467441 --name addons-467441 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-467441 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-467441 --network addons-467441 --ip 192.168.49.2 --volume addons-467441:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2
	I1216 11:13:40.130783 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Running}}
	I1216 11:13:40.163054 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:13:40.186297 1138702 cli_runner.go:164] Run: docker exec addons-467441 stat /var/lib/dpkg/alternatives/iptables
	I1216 11:13:40.235189 1138702 oci.go:144] the created container "addons-467441" has a running status.
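The inspect calls above are how the run confirms the kic container actually came up before provisioning starts. A rough Go equivalent of that status check, shelling out the same way cli_runner does (container name taken from this run; the polling loop is illustrative):

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // containerStatus runs `docker container inspect <name> --format={{.State.Status}}`,
    // matching the logged cli_runner invocations.
    func containerStatus(name string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", name,
            "--format={{.State.Status}}").Output()
        return strings.TrimSpace(string(out)), err
    }

    func main() {
        for i := 0; i < 10; i++ {
            status, err := containerStatus("addons-467441")
            if err == nil && status == "running" {
                fmt.Println("container is running")
                return
            }
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for container")
    }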
	I1216 11:13:40.235217 1138702 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa...
	I1216 11:13:40.890711 1138702 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1216 11:13:40.924229 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:13:40.942928 1138702 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1216 11:13:40.942949 1138702 kic_runner.go:114] Args: [docker exec --privileged addons-467441 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1216 11:13:41.007854 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:13:41.026726 1138702 machine.go:93] provisionDockerMachine start ...
	I1216 11:13:41.026820 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:13:41.049284 1138702 main.go:141] libmachine: Using SSH client type: native
	I1216 11:13:41.049534 1138702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 34241 <nil> <nil>}
	I1216 11:13:41.049543 1138702 main.go:141] libmachine: About to run SSH command:
	hostname
	I1216 11:13:41.188526 1138702 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-467441
	
	I1216 11:13:41.188597 1138702 ubuntu.go:169] provisioning hostname "addons-467441"
	I1216 11:13:41.188693 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:13:41.211264 1138702 main.go:141] libmachine: Using SSH client type: native
	I1216 11:13:41.211525 1138702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 34241 <nil> <nil>}
	I1216 11:13:41.211539 1138702 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-467441 && echo "addons-467441" | sudo tee /etc/hostname
	I1216 11:13:41.356815 1138702 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-467441
	
	I1216 11:13:41.356903 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:13:41.375789 1138702 main.go:141] libmachine: Using SSH client type: native
	I1216 11:13:41.376045 1138702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 34241 <nil> <nil>}
	I1216 11:13:41.376069 1138702 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-467441' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-467441/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-467441' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1216 11:13:41.512784 1138702 main.go:141] libmachine: SSH cmd err, output: <nil>: 
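Everything in provisionDockerMachine runs over SSH to the port Docker forwarded for the container (127.0.0.1:34241 in this run). A compact sketch of executing one such command with golang.org/x/crypto/ssh; the key path and address come from this log, and the host-key check is deliberately relaxed since the target is a local test container, not a production host:

    package main

    import (
        "bytes"
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(keyBytes)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // local test container only
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:34241", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        sess, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer sess.Close()

        var out bytes.Buffer
        sess.Stdout = &out
        if err := sess.Run("hostname"); err != nil {
            log.Fatal(err)
        }
        fmt.Print(out.String()) // expected: addons-467441
    }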
	I1216 11:13:41.512818 1138702 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20107-1132549/.minikube CaCertPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20107-1132549/.minikube}
	I1216 11:13:41.512849 1138702 ubuntu.go:177] setting up certificates
	I1216 11:13:41.512858 1138702 provision.go:84] configureAuth start
	I1216 11:13:41.512923 1138702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-467441
	I1216 11:13:41.530459 1138702 provision.go:143] copyHostCerts
	I1216 11:13:41.530545 1138702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-1132549/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20107-1132549/.minikube/ca.pem (1078 bytes)
	I1216 11:13:41.530678 1138702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-1132549/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20107-1132549/.minikube/cert.pem (1123 bytes)
	I1216 11:13:41.530742 1138702 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20107-1132549/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20107-1132549/.minikube/key.pem (1679 bytes)
	I1216 11:13:41.530801 1138702 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20107-1132549/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20107-1132549/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20107-1132549/.minikube/certs/ca-key.pem org=jenkins.addons-467441 san=[127.0.0.1 192.168.49.2 addons-467441 localhost minikube]
	I1216 11:13:41.857987 1138702 provision.go:177] copyRemoteCerts
	I1216 11:13:41.858056 1138702 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1216 11:13:41.858099 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:13:41.878051 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:13:41.973963 1138702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-1132549/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I1216 11:13:41.998468 1138702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-1132549/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1216 11:13:42.027072 1138702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-1132549/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I1216 11:13:42.052364 1138702 provision.go:87] duration metric: took 539.491064ms to configureAuth
	I1216 11:13:42.052397 1138702 ubuntu.go:193] setting minikube options for container-runtime
	I1216 11:13:42.052588 1138702 config.go:182] Loaded profile config "addons-467441": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:13:42.052706 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:13:42.071074 1138702 main.go:141] libmachine: Using SSH client type: native
	I1216 11:13:42.071363 1138702 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 34241 <nil> <nil>}
	I1216 11:13:42.071390 1138702 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1216 11:13:42.314277 1138702 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1216 11:13:42.314300 1138702 machine.go:96] duration metric: took 1.28755524s to provisionDockerMachine
	I1216 11:13:42.314311 1138702 client.go:171] duration metric: took 10.573699241s to LocalClient.Create
	I1216 11:13:42.314325 1138702 start.go:167] duration metric: took 10.573761959s to libmachine.API.Create "addons-467441"
	I1216 11:13:42.314332 1138702 start.go:293] postStartSetup for "addons-467441" (driver="docker")
	I1216 11:13:42.314344 1138702 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1216 11:13:42.314413 1138702 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1216 11:13:42.314460 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:13:42.333948 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:13:42.430859 1138702 ssh_runner.go:195] Run: cat /etc/os-release
	I1216 11:13:42.434252 1138702 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1216 11:13:42.434291 1138702 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1216 11:13:42.434303 1138702 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1216 11:13:42.434311 1138702 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1216 11:13:42.434322 1138702 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-1132549/.minikube/addons for local assets ...
	I1216 11:13:42.434440 1138702 filesync.go:126] Scanning /home/jenkins/minikube-integration/20107-1132549/.minikube/files for local assets ...
	I1216 11:13:42.434479 1138702 start.go:296] duration metric: took 120.139349ms for postStartSetup
	I1216 11:13:42.434813 1138702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-467441
	I1216 11:13:42.452587 1138702 profile.go:143] Saving config to /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/config.json ...
	I1216 11:13:42.452952 1138702 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 11:13:42.453008 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:13:42.471315 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:13:42.561844 1138702 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1216 11:13:42.566651 1138702 start.go:128] duration metric: took 10.829838817s to createHost
	I1216 11:13:42.566682 1138702 start.go:83] releasing machines lock for "addons-467441", held for 10.830007945s
	I1216 11:13:42.566789 1138702 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-467441
	I1216 11:13:42.583497 1138702 ssh_runner.go:195] Run: cat /version.json
	I1216 11:13:42.583554 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:13:42.583808 1138702 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1216 11:13:42.583887 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:13:42.601939 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:13:42.608900 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:13:42.696102 1138702 ssh_runner.go:195] Run: systemctl --version
	I1216 11:13:42.827648 1138702 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1216 11:13:42.968154 1138702 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1216 11:13:42.972430 1138702 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 11:13:42.993395 1138702 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1216 11:13:42.993483 1138702 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1216 11:13:43.033143 1138702 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1216 11:13:43.033209 1138702 start.go:495] detecting cgroup driver to use...
	I1216 11:13:43.033256 1138702 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1216 11:13:43.033341 1138702 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1216 11:13:43.050220 1138702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1216 11:13:43.062437 1138702 docker.go:217] disabling cri-docker service (if available) ...
	I1216 11:13:43.062543 1138702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1216 11:13:43.076205 1138702 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1216 11:13:43.090629 1138702 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1216 11:13:43.171417 1138702 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1216 11:13:43.269351 1138702 docker.go:233] disabling docker service ...
	I1216 11:13:43.269420 1138702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1216 11:13:43.290166 1138702 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1216 11:13:43.302424 1138702 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1216 11:13:43.393044 1138702 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1216 11:13:43.486576 1138702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1216 11:13:43.498264 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1216 11:13:43.514159 1138702 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1216 11:13:43.514231 1138702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:13:43.523461 1138702 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1216 11:13:43.523581 1138702 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:13:43.533571 1138702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:13:43.543698 1138702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:13:43.553926 1138702 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1216 11:13:43.563206 1138702 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:13:43.572869 1138702 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1216 11:13:43.588024 1138702 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
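Taken together, the sed edits above should leave /etc/crio/crio.conf.d/02-crio.conf with roughly these keys; this is a reconstruction from the commands, not a dump of the actual file:

    pause_image = "registry.k8s.io/pause:3.10"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"
    default_sysctls = [
      "net.ipv4.ip_unprivileged_port_start=0",
    ]

The unprivileged-port sysctl is what later lets the ingress controller bind low ports inside pods without extra privileges.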
	I1216 11:13:43.597303 1138702 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1216 11:13:43.605557 1138702 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1216 11:13:43.613884 1138702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:13:43.701222 1138702 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1216 11:13:43.829646 1138702 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1216 11:13:43.829906 1138702 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1216 11:13:43.834796 1138702 start.go:563] Will wait 60s for crictl version
	I1216 11:13:43.834909 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:13:43.838557 1138702 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1216 11:13:43.879735 1138702 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1216 11:13:43.879906 1138702 ssh_runner.go:195] Run: crio --version
	I1216 11:13:43.922760 1138702 ssh_runner.go:195] Run: crio --version
	I1216 11:13:43.965473 1138702 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1216 11:13:43.968338 1138702 cli_runner.go:164] Run: docker network inspect addons-467441 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1216 11:13:43.984890 1138702 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1216 11:13:43.988408 1138702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1216 11:13:43.999027 1138702 kubeadm.go:883] updating cluster {Name:addons-467441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-467441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1216 11:13:43.999147 1138702 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1216 11:13:43.999213 1138702 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 11:13:44.085060 1138702 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 11:13:44.085083 1138702 crio.go:433] Images already preloaded, skipping extraction
	I1216 11:13:44.085138 1138702 ssh_runner.go:195] Run: sudo crictl images --output json
	I1216 11:13:44.124719 1138702 crio.go:514] all images are preloaded for cri-o runtime.
	I1216 11:13:44.124744 1138702 cache_images.go:84] Images are preloaded, skipping loading
	I1216 11:13:44.124773 1138702 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1216 11:13:44.124880 1138702 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-467441 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-467441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1216 11:13:44.124982 1138702 ssh_runner.go:195] Run: crio config
	I1216 11:13:44.199284 1138702 cni.go:84] Creating CNI manager for ""
	I1216 11:13:44.199307 1138702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 11:13:44.199318 1138702 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1216 11:13:44.199342 1138702 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-467441 NodeName:addons-467441 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1216 11:13:44.199479 1138702 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-467441"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1216 11:13:44.199557 1138702 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1216 11:13:44.208470 1138702 binaries.go:44] Found k8s binaries, skipping transfer
	I1216 11:13:44.208548 1138702 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1216 11:13:44.217262 1138702 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1216 11:13:44.235644 1138702 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1216 11:13:44.254591 1138702 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
	I1216 11:13:44.272884 1138702 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1216 11:13:44.276284 1138702 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
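Both /etc/hosts updates (host.minikube.internal earlier, control-plane.minikube.internal here) use the same drop-then-append idiom: filter out any stale entry, then write a fresh tab-separated line. The same logic as a plain Go sketch; the path, IP, and hostname are parameters, and ensureHostsEntry is an illustrative helper name, not a minikube function:

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry drops any stale line ending in "\t<name>" and appends
    // a fresh "<ip>\t<name>" entry, mirroring the grep -v / echo pipeline.
    func ensureHostsEntry(path, ip, name string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // stale entry for this name; rewritten below
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        // Needs root in practice, just as the logged command uses sudo.
        if err := ensureHostsEntry("/etc/hosts", "192.168.49.2", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }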
	I1216 11:13:44.287079 1138702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:13:44.376317 1138702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 11:13:44.389899 1138702 certs.go:68] Setting up /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441 for IP: 192.168.49.2
	I1216 11:13:44.389931 1138702 certs.go:194] generating shared ca certs ...
	I1216 11:13:44.389965 1138702 certs.go:226] acquiring lock for ca certs: {Name:mk010ea4b11a1a3a57224479eec9717d60444c54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:13:44.390134 1138702 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20107-1132549/.minikube/ca.key
	I1216 11:13:44.825309 1138702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-1132549/.minikube/ca.crt ...
	I1216 11:13:44.825340 1138702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-1132549/.minikube/ca.crt: {Name:mke58c373925d39f5dfe073658cbfc0208df6c99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:13:44.826188 1138702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-1132549/.minikube/ca.key ...
	I1216 11:13:44.826207 1138702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-1132549/.minikube/ca.key: {Name:mka2264f04472ab6f16e0c77f2395ac6c64d531f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:13:44.826895 1138702 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20107-1132549/.minikube/proxy-client-ca.key
	I1216 11:13:45.362285 1138702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-1132549/.minikube/proxy-client-ca.crt ...
	I1216 11:13:45.362394 1138702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-1132549/.minikube/proxy-client-ca.crt: {Name:mk1778f8456039df95618c7d6840b9eb924220c7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:13:45.362580 1138702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-1132549/.minikube/proxy-client-ca.key ...
	I1216 11:13:45.362596 1138702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-1132549/.minikube/proxy-client-ca.key: {Name:mkf8cdfd6a5cac6139ec81f20a14ef50e56d1477 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:13:45.363309 1138702 certs.go:256] generating profile certs ...
	I1216 11:13:45.363382 1138702 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.key
	I1216 11:13:45.363408 1138702 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt with IP's: []
	I1216 11:13:46.021702 1138702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt ...
	I1216 11:13:46.021742 1138702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: {Name:mkbe5d1f751761ada51ffd61defa3d5bf59ca7ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:13:46.021970 1138702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.key ...
	I1216 11:13:46.021986 1138702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.key: {Name:mkaa5e181cc6518cdfa1b39e3d8ed34b2e04c552 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:13:46.022089 1138702 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/apiserver.key.a9e908fe
	I1216 11:13:46.022112 1138702 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/apiserver.crt.a9e908fe with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1216 11:13:47.046993 1138702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/apiserver.crt.a9e908fe ...
	I1216 11:13:47.047026 1138702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/apiserver.crt.a9e908fe: {Name:mkef6cdde359c30fcaa658078332adc7b9c4f793 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:13:47.047231 1138702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/apiserver.key.a9e908fe ...
	I1216 11:13:47.047246 1138702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/apiserver.key.a9e908fe: {Name:mk3ef25461d52a94fd2ace0b91b2d5657eeac57f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:13:47.047337 1138702 certs.go:381] copying /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/apiserver.crt.a9e908fe -> /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/apiserver.crt
	I1216 11:13:47.047433 1138702 certs.go:385] copying /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/apiserver.key.a9e908fe -> /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/apiserver.key
	I1216 11:13:47.047493 1138702 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/proxy-client.key
	I1216 11:13:47.047516 1138702 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/proxy-client.crt with IP's: []
	I1216 11:13:47.617258 1138702 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/proxy-client.crt ...
	I1216 11:13:47.617291 1138702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/proxy-client.crt: {Name:mk7583908de7b7da789bffe19ba40e7022cc5497 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:13:47.618162 1138702 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/proxy-client.key ...
	I1216 11:13:47.618185 1138702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/proxy-client.key: {Name:mk8aef23970592be9a0b81f8db808d51ad4c4c16 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
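The certs.go steps above generate two self-signed CAs (minikubeCA, proxyClientCA) and then profile certs signed by them. A self-contained sketch of just the CA step with Go's crypto/x509, assuming an RSA key; minikube's actual helper differs in details such as serial-number and SAN handling:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "os"
        "time"
    )

    func main() {
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now().Add(-time.Hour),
            NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config above
            KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
            BasicConstraintsValid: true,
            IsCA:                  true,
        }
        // Self-signed: the template serves as both subject and issuer.
        der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
        if err != nil {
            log.Fatal(err)
        }
        writePEM("ca.crt", "CERTIFICATE", der)
        writePEM("ca.key", "RSA PRIVATE KEY", x509.MarshalPKCS1PrivateKey(key))
    }

    // writePEM encodes one DER blob as a PEM block at path.
    func writePEM(path, blockType string, der []byte) {
        f, err := os.Create(path)
        if err != nil {
            log.Fatal(err)
        }
        defer f.Close()
        if err := pem.Encode(f, &pem.Block{Type: blockType, Bytes: der}); err != nil {
            log.Fatal(err)
        }
    }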
	I1216 11:13:47.619007 1138702 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-1132549/.minikube/certs/ca-key.pem (1679 bytes)
	I1216 11:13:47.619055 1138702 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-1132549/.minikube/certs/ca.pem (1078 bytes)
	I1216 11:13:47.619092 1138702 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-1132549/.minikube/certs/cert.pem (1123 bytes)
	I1216 11:13:47.619123 1138702 certs.go:484] found cert: /home/jenkins/minikube-integration/20107-1132549/.minikube/certs/key.pem (1679 bytes)
	I1216 11:13:47.619809 1138702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-1132549/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1216 11:13:47.644616 1138702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-1132549/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1216 11:13:47.669646 1138702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-1132549/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1216 11:13:47.693649 1138702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-1132549/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I1216 11:13:47.718365 1138702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1216 11:13:47.741899 1138702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1216 11:13:47.765868 1138702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1216 11:13:47.790121 1138702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I1216 11:13:47.813935 1138702 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20107-1132549/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1216 11:13:47.838875 1138702 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1216 11:13:47.858546 1138702 ssh_runner.go:195] Run: openssl version
	I1216 11:13:47.863964 1138702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1216 11:13:47.873330 1138702 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:13:47.876606 1138702 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 16 11:13 /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:13:47.876677 1138702 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1216 11:13:47.884122 1138702 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I1216 11:13:47.893782 1138702 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1216 11:13:47.897187 1138702 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1216 11:13:47.897239 1138702 kubeadm.go:392] StartCluster: {Name:addons-467441 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-467441 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:13:47.897322 1138702 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1216 11:13:47.897398 1138702 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1216 11:13:47.944721 1138702 cri.go:89] found id: ""
	I1216 11:13:47.944817 1138702 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1216 11:13:47.953814 1138702 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1216 11:13:47.962925 1138702 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1216 11:13:47.963036 1138702 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1216 11:13:47.971792 1138702 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1216 11:13:47.971814 1138702 kubeadm.go:157] found existing configuration files:
	
	I1216 11:13:47.971890 1138702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1216 11:13:47.980741 1138702 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1216 11:13:47.980839 1138702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1216 11:13:47.989331 1138702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1216 11:13:47.998398 1138702 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1216 11:13:47.998517 1138702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1216 11:13:48.008279 1138702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1216 11:13:48.018452 1138702 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1216 11:13:48.018533 1138702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1216 11:13:48.027827 1138702 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1216 11:13:48.037828 1138702 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1216 11:13:48.037963 1138702 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1216 11:13:48.047889 1138702 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1216 11:13:48.090451 1138702 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1216 11:13:48.090513 1138702 kubeadm.go:310] [preflight] Running pre-flight checks
	I1216 11:13:48.109689 1138702 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1216 11:13:48.109766 1138702 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1072-aws
	I1216 11:13:48.109807 1138702 kubeadm.go:310] OS: Linux
	I1216 11:13:48.109864 1138702 kubeadm.go:310] CGROUPS_CPU: enabled
	I1216 11:13:48.109917 1138702 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1216 11:13:48.109967 1138702 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1216 11:13:48.110021 1138702 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1216 11:13:48.110073 1138702 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1216 11:13:48.110128 1138702 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1216 11:13:48.110176 1138702 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1216 11:13:48.110229 1138702 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1216 11:13:48.110278 1138702 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1216 11:13:48.169671 1138702 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1216 11:13:48.169787 1138702 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1216 11:13:48.169890 1138702 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1216 11:13:48.176450 1138702 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1216 11:13:48.183351 1138702 out.go:235]   - Generating certificates and keys ...
	I1216 11:13:48.183491 1138702 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1216 11:13:48.183569 1138702 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1216 11:13:48.378862 1138702 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1216 11:13:49.300047 1138702 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1216 11:13:50.186067 1138702 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1216 11:13:50.780969 1138702 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1216 11:13:51.097828 1138702 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1216 11:13:51.097980 1138702 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-467441 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 11:13:51.339457 1138702 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1216 11:13:51.339955 1138702 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-467441 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1216 11:13:51.540511 1138702 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1216 11:13:52.237262 1138702 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1216 11:13:52.697623 1138702 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1216 11:13:52.697827 1138702 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1216 11:13:52.895081 1138702 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1216 11:13:53.855936 1138702 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1216 11:13:54.042422 1138702 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1216 11:13:54.344492 1138702 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1216 11:13:54.838688 1138702 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1216 11:13:54.839279 1138702 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1216 11:13:54.842285 1138702 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1216 11:13:54.845674 1138702 out.go:235]   - Booting up control plane ...
	I1216 11:13:54.845780 1138702 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1216 11:13:54.845863 1138702 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1216 11:13:54.845940 1138702 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1216 11:13:54.855044 1138702 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1216 11:13:54.861005 1138702 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1216 11:13:54.861247 1138702 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1216 11:13:54.961366 1138702 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1216 11:13:54.961515 1138702 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1216 11:13:55.963423 1138702 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001912945s
	I1216 11:13:55.963523 1138702 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1216 11:14:02.464634 1138702 kubeadm.go:310] [api-check] The API server is healthy after 6.501456843s
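
Both waits above poll plain health endpoints, so they can be reproduced by hand if a boot ever hangs here. Ports are taken from this log (10248 for the kubelet, 8443 for the API server at 192.168.49.2); hitting /healthz anonymously assumes minikube's default of leaving anonymous auth on, which grants unauthenticated access to the health endpoints:

	curl -s  http://127.0.0.1:10248/healthz       # kubelet; prints "ok" when healthy
	curl -sk https://192.168.49.2:8443/healthz    # kube-apiserver; -k since the cluster CA is not in the host trust store
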
	I1216 11:14:02.489573 1138702 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1216 11:14:02.505609 1138702 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1216 11:14:02.531611 1138702 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1216 11:14:02.531807 1138702 kubeadm.go:310] [mark-control-plane] Marking the node addons-467441 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1216 11:14:02.544803 1138702 kubeadm.go:310] [bootstrap-token] Using token: m5v603.9e6ugxdm6391fj1l
	I1216 11:14:02.547845 1138702 out.go:235]   - Configuring RBAC rules ...
	I1216 11:14:02.547974 1138702 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1216 11:14:02.552264 1138702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1216 11:14:02.560678 1138702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1216 11:14:02.569321 1138702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1216 11:14:02.578230 1138702 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1216 11:14:02.584172 1138702 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1216 11:14:02.873125 1138702 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1216 11:14:03.317576 1138702 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1216 11:14:03.871684 1138702 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
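
The bootstrap-token phase above is what makes the join commands further down work: the token logged at 11:14:02 authenticates a joining node, and the "cluster-info" ConfigMap in kube-public publishes the CA it verifies against. Both are inspectable after init; the kubeadm path below assumes the same binaries directory this run uses for kubectl, which the log does not show directly:

	sudo /var/lib/minikube/binaries/v1.31.2/kubeadm token list
	sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-public get configmap cluster-info -o jsonpath='{.data.kubeconfig}'
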
	I1216 11:14:03.872904 1138702 kubeadm.go:310] 
	I1216 11:14:03.872981 1138702 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1216 11:14:03.872996 1138702 kubeadm.go:310] 
	I1216 11:14:03.873074 1138702 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1216 11:14:03.873083 1138702 kubeadm.go:310] 
	I1216 11:14:03.873110 1138702 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1216 11:14:03.873172 1138702 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1216 11:14:03.873227 1138702 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1216 11:14:03.873235 1138702 kubeadm.go:310] 
	I1216 11:14:03.873290 1138702 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1216 11:14:03.873298 1138702 kubeadm.go:310] 
	I1216 11:14:03.873346 1138702 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1216 11:14:03.873354 1138702 kubeadm.go:310] 
	I1216 11:14:03.873407 1138702 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1216 11:14:03.873486 1138702 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1216 11:14:03.873565 1138702 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1216 11:14:03.873574 1138702 kubeadm.go:310] 
	I1216 11:14:03.873659 1138702 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1216 11:14:03.873738 1138702 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1216 11:14:03.873744 1138702 kubeadm.go:310] 
	I1216 11:14:03.873829 1138702 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token m5v603.9e6ugxdm6391fj1l \
	I1216 11:14:03.873941 1138702 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:732496d321a5163361c7bb7221bca3ef9277db1e77b552da68d2c35a6f9c3ac6 \
	I1216 11:14:03.873967 1138702 kubeadm.go:310] 	--control-plane 
	I1216 11:14:03.873974 1138702 kubeadm.go:310] 
	I1216 11:14:03.874059 1138702 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1216 11:14:03.874068 1138702 kubeadm.go:310] 
	I1216 11:14:03.874152 1138702 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token m5v603.9e6ugxdm6391fj1l \
	I1216 11:14:03.874259 1138702 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:732496d321a5163361c7bb7221bca3ef9277db1e77b552da68d2c35a6f9c3ac6 
	I1216 11:14:03.877479 1138702 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1072-aws\n", err: exit status 1
	I1216 11:14:03.877599 1138702 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
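
A note on the join output: --discovery-token-ca-cert-hash is the SHA-256 of the cluster CA's public key, so the value above can be recomputed from the CA file. The path below is minikube's certs dir from earlier in this log rather than the stock /etc/kubernetes/pki, and the pipeline assumes kubeadm's default RSA CA key, so treat it as a sketch:

	sudo openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
	  | openssl rsa -pubin -outform der 2>/dev/null \
	  | openssl dgst -sha256 -hex | sed 's/^.* //'
	# expect: 732496d321a5163361c7bb7221bca3ef9277db1e77b552da68d2c35a6f9c3ac6
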
	I1216 11:14:03.877621 1138702 cni.go:84] Creating CNI manager for ""
	I1216 11:14:03.877629 1138702 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 11:14:03.882423 1138702 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1216 11:14:03.885317 1138702 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1216 11:14:03.888900 1138702 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1216 11:14:03.888967 1138702 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1216 11:14:03.906618 1138702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
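
The docker driver paired with the crio runtime is why kindnet was recommended at 11:14:03.877629; the /var/tmp/minikube/cni.yaml applied above is its DaemonSet manifest. Rollout can be checked once the node registers; the app=kindnet label below follows kindnet's upstream manifest and is an assumption, not something captured in this log:

	sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get pods -l app=kindnet -o wide
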
	I1216 11:14:04.195666 1138702 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1216 11:14:04.195802 1138702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 11:14:04.195899 1138702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-467441 minikube.k8s.io/updated_at=2024_12_16T11_14_04_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=22da80be3b90f71512d84256b3df4ef76bd13ff8 minikube.k8s.io/name=addons-467441 minikube.k8s.io/primary=true
	I1216 11:14:04.339273 1138702 ops.go:34] apiserver oom_adj: -16
	I1216 11:14:04.339455 1138702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 11:14:04.840077 1138702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 11:14:05.340192 1138702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 11:14:05.840256 1138702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 11:14:06.339510 1138702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 11:14:06.840205 1138702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 11:14:07.339993 1138702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 11:14:07.839507 1138702 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1216 11:14:07.930343 1138702 kubeadm.go:1113] duration metric: took 3.734585351s to wait for elevateKubeSystemPrivileges
	I1216 11:14:07.930370 1138702 kubeadm.go:394] duration metric: took 20.033135259s to StartCluster
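
The burst of "get sa default" calls above is a readiness poll: after creating the minikube-rbac ClusterRoleBinding, minikube re-runs the query at roughly 500ms intervals (visible in the timestamps) until the default ServiceAccount exists, i.e. until the controller-manager's service-account controller is serving. A minimal shell equivalent of that wait:

	until sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	    get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done
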
	I1216 11:14:07.930387 1138702 settings.go:142] acquiring lock: {Name:mkb28b824e30aa946b7dc0b254d517c0b70b9782 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:14:07.931270 1138702 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20107-1132549/kubeconfig
	I1216 11:14:07.931695 1138702 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20107-1132549/kubeconfig: {Name:mka4860de2b5135bd0f5db65e71bb8db0bcf8bc6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1216 11:14:07.931891 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1216 11:14:07.931914 1138702 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1216 11:14:07.932155 1138702 config.go:182] Loaded profile config "addons-467441": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:14:07.932185 1138702 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
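
That toEnable map is the resolved addon set for the profile; the same view is available from the driver host with the addons subcommand (profile name from this run):

	minikube -p addons-467441 addons list
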
	I1216 11:14:07.932278 1138702 addons.go:69] Setting yakd=true in profile "addons-467441"
	I1216 11:14:07.932292 1138702 addons.go:234] Setting addon yakd=true in "addons-467441"
	I1216 11:14:07.932316 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:07.932903 1138702 addons.go:69] Setting inspektor-gadget=true in profile "addons-467441"
	I1216 11:14:07.932920 1138702 addons.go:234] Setting addon inspektor-gadget=true in "addons-467441"
	I1216 11:14:07.932961 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:07.933054 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:07.933462 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:07.934380 1138702 addons.go:69] Setting metrics-server=true in profile "addons-467441"
	I1216 11:14:07.934410 1138702 addons.go:234] Setting addon metrics-server=true in "addons-467441"
	I1216 11:14:07.934462 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:07.935023 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:07.938942 1138702 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-467441"
	I1216 11:14:07.938980 1138702 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-467441"
	I1216 11:14:07.939013 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:07.939558 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:07.952076 1138702 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-467441"
	I1216 11:14:07.952152 1138702 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-467441"
	I1216 11:14:07.952205 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:07.952916 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:07.957637 1138702 addons.go:69] Setting cloud-spanner=true in profile "addons-467441"
	I1216 11:14:07.957719 1138702 addons.go:234] Setting addon cloud-spanner=true in "addons-467441"
	I1216 11:14:07.957782 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:07.958484 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:07.965451 1138702 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-467441"
	I1216 11:14:07.965581 1138702 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-467441"
	I1216 11:14:07.965640 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:07.966533 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:07.974831 1138702 addons.go:69] Setting registry=true in profile "addons-467441"
	I1216 11:14:07.974877 1138702 addons.go:234] Setting addon registry=true in "addons-467441"
	I1216 11:14:07.974916 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:07.975493 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:07.992730 1138702 addons.go:69] Setting storage-provisioner=true in profile "addons-467441"
	I1216 11:14:07.992789 1138702 addons.go:69] Setting gcp-auth=true in profile "addons-467441"
	I1216 11:14:07.992785 1138702 addons.go:234] Setting addon storage-provisioner=true in "addons-467441"
	I1216 11:14:07.992812 1138702 mustload.go:65] Loading cluster: addons-467441
	I1216 11:14:07.992842 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:07.993043 1138702 config.go:182] Loaded profile config "addons-467441": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:14:07.993338 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:07.993382 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:07.992730 1138702 addons.go:69] Setting default-storageclass=true in profile "addons-467441"
	I1216 11:14:08.012852 1138702 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-467441"
	I1216 11:14:08.013274 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:08.015589 1138702 addons.go:69] Setting ingress=true in profile "addons-467441"
	I1216 11:14:08.015623 1138702 addons.go:234] Setting addon ingress=true in "addons-467441"
	I1216 11:14:08.015681 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:08.016267 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:08.022651 1138702 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-467441"
	I1216 11:14:08.022688 1138702 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-467441"
	I1216 11:14:08.023125 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:08.037400 1138702 addons.go:69] Setting ingress-dns=true in profile "addons-467441"
	I1216 11:14:08.037484 1138702 addons.go:234] Setting addon ingress-dns=true in "addons-467441"
	I1216 11:14:08.037563 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:08.038216 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:08.038469 1138702 addons.go:69] Setting volcano=true in profile "addons-467441"
	I1216 11:14:08.038514 1138702 addons.go:234] Setting addon volcano=true in "addons-467441"
	I1216 11:14:08.038583 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:08.044887 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:08.057675 1138702 out.go:177] * Verifying Kubernetes components...
	I1216 11:14:08.078268 1138702 addons.go:69] Setting volumesnapshots=true in profile "addons-467441"
	I1216 11:14:08.078349 1138702 addons.go:234] Setting addon volumesnapshots=true in "addons-467441"
	I1216 11:14:08.078417 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:08.079059 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:08.100212 1138702 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1216 11:14:08.103925 1138702 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1216 11:14:08.104176 1138702 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1216 11:14:08.104327 1138702 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1216 11:14:08.104341 1138702 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1216 11:14:08.104420 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.140346 1138702 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1216 11:14:08.162833 1138702 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1216 11:14:08.162913 1138702 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1216 11:14:08.163019 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.173638 1138702 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1216 11:14:08.173883 1138702 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1216 11:14:08.175678 1138702 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-467441"
	I1216 11:14:08.175719 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:08.176141 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:08.185166 1138702 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 11:14:08.185196 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1216 11:14:08.185257 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.188652 1138702 addons.go:234] Setting addon default-storageclass=true in "addons-467441"
	I1216 11:14:08.188689 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:08.189196 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:08.200220 1138702 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1216 11:14:08.202864 1138702 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	W1216 11:14:08.203073 1138702 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1216 11:14:08.203234 1138702 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1216 11:14:08.216599 1138702 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1216 11:14:08.217777 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:08.216845 1138702 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 11:14:08.225313 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1216 11:14:08.225397 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.225784 1138702 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1216 11:14:08.226083 1138702 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 11:14:08.226096 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1216 11:14:08.226145 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.216855 1138702 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1216 11:14:08.237325 1138702 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1216 11:14:08.237356 1138702 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1216 11:14:08.237428 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.237869 1138702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1216 11:14:08.238154 1138702 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 11:14:08.238167 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1216 11:14:08.238216 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.257180 1138702 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1216 11:14:08.257205 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1216 11:14:08.257268 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.266751 1138702 out.go:177]   - Using image docker.io/registry:2.8.3
	I1216 11:14:08.270797 1138702 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1216 11:14:08.270825 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1216 11:14:08.270895 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.300098 1138702 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1216 11:14:08.301032 1138702 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1216 11:14:08.301543 1138702 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1216 11:14:08.301622 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.301279 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:08.319412 1138702 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1216 11:14:08.321322 1138702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1216 11:14:08.323869 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:08.325468 1138702 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1216 11:14:08.331750 1138702 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1216 11:14:08.334667 1138702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1216 11:14:08.335033 1138702 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 11:14:08.335050 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1216 11:14:08.335120 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.335268 1138702 out.go:177]   - Using image docker.io/busybox:stable
	I1216 11:14:08.357358 1138702 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 11:14:08.357430 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1216 11:14:08.357518 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.379856 1138702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1216 11:14:08.388816 1138702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1216 11:14:08.395531 1138702 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1216 11:14:08.400574 1138702 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1216 11:14:08.402758 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:08.407112 1138702 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1216 11:14:08.407136 1138702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1216 11:14:08.407276 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.408507 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:08.456879 1138702 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1216 11:14:08.456910 1138702 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1216 11:14:08.456980 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:08.465230 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:08.504865 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:08.512026 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:08.513662 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:08.521397 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:08.538441 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:08.542778 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:08.554380 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:08.564863 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:08.583078 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
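
Each installer goroutine above opens its own SSH session to the node, all through host port 34241, which is simply whatever Docker mapped to the container's port 22. The mapping is resolved with the inspect template that recurs through this log:

	docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-467441
	# -> 34241 in this run
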
	I1216 11:14:08.686112 1138702 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1216 11:14:08.686251 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I1216 11:14:08.744840 1138702 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1216 11:14:08.744862 1138702 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1216 11:14:08.749105 1138702 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1216 11:14:08.749129 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1216 11:14:08.850788 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1216 11:14:08.876207 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1216 11:14:08.885603 1138702 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1216 11:14:08.885628 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1216 11:14:08.903004 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1216 11:14:08.935617 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1216 11:14:08.938881 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1216 11:14:08.959698 1138702 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1216 11:14:08.959723 1138702 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1216 11:14:08.963296 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1216 11:14:08.971173 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1216 11:14:08.973111 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1216 11:14:09.007253 1138702 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1216 11:14:09.007281 1138702 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1216 11:14:09.012005 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1216 11:14:09.012966 1138702 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1216 11:14:09.012991 1138702 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1216 11:14:09.028941 1138702 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1216 11:14:09.028969 1138702 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1216 11:14:09.088256 1138702 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1216 11:14:09.088288 1138702 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1216 11:14:09.123855 1138702 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1216 11:14:09.123881 1138702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1216 11:14:09.127798 1138702 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1216 11:14:09.127823 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1216 11:14:09.152282 1138702 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 11:14:09.152306 1138702 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1216 11:14:09.242685 1138702 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1216 11:14:09.242710 1138702 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1216 11:14:09.283462 1138702 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1216 11:14:09.283488 1138702 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1216 11:14:09.335015 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1216 11:14:09.338289 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1216 11:14:09.378205 1138702 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1216 11:14:09.378234 1138702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1216 11:14:09.444630 1138702 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1216 11:14:09.444653 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1216 11:14:09.451120 1138702 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1216 11:14:09.451145 1138702 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1216 11:14:09.594916 1138702 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1216 11:14:09.594941 1138702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1216 11:14:09.678034 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1216 11:14:09.683862 1138702 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 11:14:09.683891 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1216 11:14:09.778471 1138702 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1216 11:14:09.778499 1138702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1216 11:14:09.803992 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 11:14:09.880767 1138702 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1216 11:14:09.880792 1138702 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1216 11:14:09.956537 1138702 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1216 11:14:09.956562 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1216 11:14:10.018376 1138702 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1216 11:14:10.018403 1138702 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1216 11:14:10.062590 1138702 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1216 11:14:10.062619 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1216 11:14:10.085609 1138702 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1216 11:14:10.085635 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1216 11:14:10.110565 1138702 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 11:14:10.110592 1138702 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1216 11:14:10.171033 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1216 11:14:11.484788 1138702 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (2.798492057s)
	I1216 11:14:11.484820 1138702 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
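
The sed pipeline that just completed rewrites CoreDNS's Corefile in place: it inserts a hosts block resolving host.minikube.internal to the docker network gateway (192.168.49.1) ahead of the forward plugin, plus a log directive ahead of errors. The edit can be confirmed with:

	sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}'
	# expect, inside the server block:
	#   hosts {
	#      192.168.49.1 host.minikube.internal
	#      fallthrough
	#   }
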
	I1216 11:14:11.484877 1138702 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (2.798694202s)
	I1216 11:14:11.485748 1138702 node_ready.go:35] waiting up to 6m0s for node "addons-467441" to be "Ready" ...
	I1216 11:14:11.486783 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (2.635968876s)
	I1216 11:14:12.285434 1138702 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-467441" context rescaled to 1 replicas
	I1216 11:14:13.567537 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:13.914143 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (5.037894583s)
	I1216 11:14:13.914256 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (5.011226541s)
	I1216 11:14:13.914324 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.978684161s)
	I1216 11:14:13.914384 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.975482297s)
	I1216 11:14:14.240274 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (5.276941975s)
	I1216 11:14:14.240534 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (5.269336931s)
	W1216 11:14:14.383405 1138702 out.go:270] ! Enabling 'default-storageclass' returned an error: running callbacks: [Error making standard the default storage class: Error while marking storage class local-path as non-default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
	I1216 11:14:14.510566 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.537418295s)
	I1216 11:14:15.282762 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.947705237s)
	I1216 11:14:15.282836 1138702 addons.go:475] Verifying addon registry=true in "addons-467441"
	I1216 11:14:15.282971 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (6.270939336s)
	I1216 11:14:15.283015 1138702 addons.go:475] Verifying addon ingress=true in "addons-467441"
	I1216 11:14:15.283396 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.945076256s)
	I1216 11:14:15.283415 1138702 addons.go:475] Verifying addon metrics-server=true in "addons-467441"
	I1216 11:14:15.283465 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.605405433s)
	I1216 11:14:15.283826 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.47979848s)
	W1216 11:14:15.283992 1138702 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1216 11:14:15.284017 1138702 retry.go:31] will retry after 186.02248ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
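
This failure is the usual CRD-ordering race: the VolumeSnapshotClass object and the CRD that defines its kind land in the same apply, and the API server has not registered the new kind by the time the CR is submitted, hence "ensure CRDs are installed first". minikube's remedy is the retry announced above, re-issued with --force at 11:14:15.470435 further down. A hedged manual alternative is to gate on the CRD reaching the Established condition before applying the class:

	# relies on shell word splitting of $KUBECTL; a sketch, not hardened
	KUBECTL="sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig"
	$KUBECTL apply -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	$KUBECTL wait --for=condition=established crd/volumesnapshotclasses.snapshot.storage.k8s.io --timeout=60s
	$KUBECTL apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
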
	I1216 11:14:15.285935 1138702 out.go:177] * Verifying registry addon...
	I1216 11:14:15.285940 1138702 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-467441 service yakd-dashboard -n yakd-dashboard
	
	I1216 11:14:15.285959 1138702 out.go:177] * Verifying ingress addon...
	I1216 11:14:15.289916 1138702 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1216 11:14:15.290865 1138702 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1216 11:14:15.297822 1138702 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 11:14:15.297862 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:15.299007 1138702 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1216 11:14:15.299030 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:15.470435 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1216 11:14:15.524625 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.35353729s)
	I1216 11:14:15.524660 1138702 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-467441"
	I1216 11:14:15.527837 1138702 out.go:177] * Verifying csi-hostpath-driver addon...
	I1216 11:14:15.531575 1138702 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1216 11:14:15.544943 1138702 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 11:14:15.544968 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:15.794949 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:15.796835 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:15.989506 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:16.036296 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:16.293827 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:16.298528 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:16.535741 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:16.794219 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:16.795700 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:17.037856 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:17.294981 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:17.295796 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:17.535576 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:17.794463 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:17.795638 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:17.989594 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:18.036334 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:18.218078 1138702 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (2.747595273s)
	I1216 11:14:18.294735 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:18.295902 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:18.456908 1138702 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1216 11:14:18.457012 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:18.473998 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:18.537544 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:18.582238 1138702 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1216 11:14:18.601280 1138702 addons.go:234] Setting addon gcp-auth=true in "addons-467441"
	I1216 11:14:18.601381 1138702 host.go:66] Checking if "addons-467441" exists ...
	I1216 11:14:18.601887 1138702 cli_runner.go:164] Run: docker container inspect addons-467441 --format={{.State.Status}}
	I1216 11:14:18.620078 1138702 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1216 11:14:18.620137 1138702 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-467441
	I1216 11:14:18.637921 1138702 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34241 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/addons-467441/id_rsa Username:docker}
	I1216 11:14:18.751762 1138702 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1216 11:14:18.754744 1138702 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1216 11:14:18.757511 1138702 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1216 11:14:18.757539 1138702 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1216 11:14:18.776541 1138702 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1216 11:14:18.776565 1138702 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1216 11:14:18.794290 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:18.795692 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:18.797811 1138702 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 11:14:18.797833 1138702 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1216 11:14:18.816248 1138702 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1216 11:14:19.035615 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:19.310075 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:19.311054 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:19.370119 1138702 addons.go:475] Verifying addon gcp-auth=true in "addons-467441"
	I1216 11:14:19.373404 1138702 out.go:177] * Verifying gcp-auth addon...
	I1216 11:14:19.377256 1138702 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1216 11:14:19.408651 1138702 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1216 11:14:19.408676 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
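
The repeated kapi.go:96 lines throughout this log come from label-selector polling: list the pods matching a selector, check their phase, sleep, retry until all are Running or the timeout expires. A minimal client-go sketch of that loop follows; waitForPodsRunning is a hypothetical name, not minikube's actual kapi implementation.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitForPodsRunning polls pods matching selector in ns until every one
	// reaches phase Running. Sketch of the pattern behind the kapi.go:96 lines.
	func waitForPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return err
			}
			running := 0
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					running++
				}
			}
			if len(pods.Items) > 0 && running == len(pods.Items) {
				return nil
			}
			time.Sleep(250 * time.Millisecond) // roughly the cadence visible in the timestamps above
		}
		return fmt.Errorf("timed out waiting for pods %q in namespace %q", selector, ns)
	}
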
	I1216 11:14:19.536190 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:19.794385 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:19.794888 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:19.880997 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:19.990098 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:20.035742 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:20.293122 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:20.295275 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:20.380577 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:20.535549 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:20.793310 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:20.795442 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:20.880779 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:21.035722 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:21.293868 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:21.295689 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:21.381073 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:21.535172 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:21.793702 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:21.794391 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:21.881359 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:22.035502 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:22.295239 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:22.296274 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:22.395316 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:22.489159 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:22.535906 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:22.794639 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:22.794840 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:22.880804 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:23.035047 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:23.294478 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:23.295294 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:23.380576 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:23.535745 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:23.798956 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:23.799446 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:23.881205 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:24.035898 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:24.294139 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:24.295074 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:24.380932 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:24.489638 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:24.535566 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:24.793420 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:24.795512 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:24.880891 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:25.035562 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:25.294712 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:25.295470 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:25.381080 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:25.536051 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:25.794486 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:25.795408 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:25.881019 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:26.035571 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:26.293744 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:26.295029 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:26.381003 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:26.535379 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:26.793291 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:26.795588 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:26.881158 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:26.989524 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:27.035329 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:27.294569 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:27.295114 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:27.381547 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:27.535702 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:27.793317 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:27.795325 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:27.891640 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:28.035637 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:28.293796 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:28.294865 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:28.381229 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:28.535506 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:28.794563 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:28.795597 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:28.881109 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:28.989920 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:29.035964 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:29.294498 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:29.295652 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:29.380981 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:29.535402 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:29.793987 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:29.795358 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:29.880554 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:30.037199 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:30.294662 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:30.295610 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:30.381042 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:30.536332 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:30.794138 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:30.795097 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:30.880207 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:31.035991 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:31.294055 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:31.295197 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:31.380834 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:31.489210 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:31.535645 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:31.792929 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:31.794668 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:31.881238 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:32.036012 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:32.294913 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:32.295799 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:32.395671 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:32.536292 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:32.794974 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:32.795182 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:32.881175 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:33.035549 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:33.294979 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:33.295569 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:33.381242 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:33.489586 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:33.535671 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:33.793560 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:33.794893 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:33.881064 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:34.036489 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:34.294625 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:34.295526 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:34.381093 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:34.534936 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:34.795852 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:34.796056 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:34.881073 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:35.035676 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:35.294697 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:35.296112 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:35.380477 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:35.535097 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:35.794050 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:35.794620 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:35.881356 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:35.989499 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:36.034970 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:36.294285 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:36.295526 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:36.381334 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:36.535407 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:36.795018 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:36.795249 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:36.880449 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:37.035221 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:37.293895 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:37.294849 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:37.380901 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:37.534998 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:37.793286 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:37.794974 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:37.881110 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:37.991490 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:38.035128 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:38.293624 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:38.294809 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:38.381355 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:38.535722 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:38.792783 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:38.794569 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:38.880605 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:39.035180 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:39.293986 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:39.294442 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:39.380909 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:39.535108 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:39.794331 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:39.795054 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:39.881050 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:40.036169 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:40.294483 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:40.295218 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:40.380498 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:40.489301 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:40.535664 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:40.793458 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:40.795165 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:40.881187 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:41.035625 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:41.292951 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:41.294218 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:41.380098 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:41.535041 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:41.794132 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:41.794988 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:41.880324 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:42.035861 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:42.293273 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:42.294386 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:42.381844 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:42.535945 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:42.794968 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:42.795150 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:42.881058 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:42.989103 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:43.035309 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:43.293934 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:43.295712 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:43.380994 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:43.534897 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:43.793218 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:43.794546 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:43.880740 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:44.035870 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:44.294145 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:44.295120 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:44.380612 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:44.535498 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:44.793501 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:44.795240 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:44.880341 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:44.989906 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:45.038676 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:45.293501 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:45.295624 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:45.381084 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:45.535038 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:45.793611 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:45.795196 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:45.883445 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:46.035179 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:46.294480 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:46.294740 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:46.380863 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:46.535033 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:46.793541 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:46.796252 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:46.880466 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:47.035019 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:47.294172 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:47.295203 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:47.380622 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:47.488823 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:47.535917 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:47.792839 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:47.794582 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:47.880791 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:48.035593 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:48.293652 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:48.295457 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:48.380634 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:48.534797 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:48.792824 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:48.794537 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:48.880689 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:49.035679 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:49.292998 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:49.295152 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:49.380433 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:49.489580 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:49.535043 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:49.794767 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:49.795558 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:49.880647 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:50.035422 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:50.294416 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:50.295001 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:50.380437 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:50.535771 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:50.793004 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:50.794062 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:50.880924 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:51.034973 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:51.293293 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:51.295250 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:51.380529 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:51.535004 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:51.793910 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:51.794966 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:51.881121 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:51.989330 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:52.035670 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:52.292865 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:52.294985 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:52.381442 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:52.535429 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:52.794297 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:52.795169 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:52.880113 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:53.035297 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:53.294488 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:53.294919 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:53.380924 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:53.535413 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:53.794655 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:53.795713 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:53.880829 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:54.035516 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:54.294075 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:54.295060 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:54.381317 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:54.489575 1138702 node_ready.go:53] node "addons-467441" has status "Ready":"False"
	I1216 11:14:54.535133 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:54.793573 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:54.795801 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:54.880882 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:54.998049 1138702 node_ready.go:49] node "addons-467441" has status "Ready":"True"
	I1216 11:14:54.998086 1138702 node_ready.go:38] duration metric: took 43.512303411s for node "addons-467441" to be "Ready" ...
	I1216 11:14:54.998098 1138702 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1216 11:14:55.027451 1138702 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-q957p" in "kube-system" namespace to be "Ready" ...
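
At this point the node flips from "Ready":"False" to "Ready":"True" (after 43.5s) and the wait moves on to the system-critical pods. The node check behind those node_ready.go lines is a scan of the node's status conditions for Ready=True; a compact sketch, not minikube's exact code:

	package readiness

	import corev1 "k8s.io/api/core/v1"

	// nodeIsReady reports whether the node's Ready condition is True; this is
	// the standard condition scan behind the "has status Ready" log lines.
	func nodeIsReady(node *corev1.Node) bool {
		for _, c := range node.Status.Conditions {
			if c.Type == corev1.NodeReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}
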
	I1216 11:14:55.054813 1138702 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1216 11:14:55.054840 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:55.331317 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:55.334471 1138702 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1216 11:14:55.334497 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:55.382339 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:55.539207 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:55.803261 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:55.804552 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:55.899919 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:56.037804 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:56.296668 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:56.298057 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:56.395666 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:56.537030 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:56.825095 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:56.835307 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:56.881722 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:57.034782 1138702 pod_ready.go:103] pod "coredns-7c65d6cfc9-q957p" in "kube-system" namespace has status "Ready":"False"
	I1216 11:14:57.046598 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:57.295426 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:57.296501 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:57.381689 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:57.574251 1138702 pod_ready.go:93] pod "coredns-7c65d6cfc9-q957p" in "kube-system" namespace has status "Ready":"True"
	I1216 11:14:57.574321 1138702 pod_ready.go:82] duration metric: took 2.546830061s for pod "coredns-7c65d6cfc9-q957p" in "kube-system" namespace to be "Ready" ...
	I1216 11:14:57.574356 1138702 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-467441" in "kube-system" namespace to be "Ready" ...
	I1216 11:14:57.578878 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:57.595107 1138702 pod_ready.go:93] pod "etcd-addons-467441" in "kube-system" namespace has status "Ready":"True"
	I1216 11:14:57.595181 1138702 pod_ready.go:82] duration metric: took 20.803647ms for pod "etcd-addons-467441" in "kube-system" namespace to be "Ready" ...
	I1216 11:14:57.595214 1138702 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-467441" in "kube-system" namespace to be "Ready" ...
	I1216 11:14:57.602065 1138702 pod_ready.go:93] pod "kube-apiserver-addons-467441" in "kube-system" namespace has status "Ready":"True"
	I1216 11:14:57.602142 1138702 pod_ready.go:82] duration metric: took 6.889222ms for pod "kube-apiserver-addons-467441" in "kube-system" namespace to be "Ready" ...
	I1216 11:14:57.602169 1138702 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-467441" in "kube-system" namespace to be "Ready" ...
	I1216 11:14:57.608217 1138702 pod_ready.go:93] pod "kube-controller-manager-addons-467441" in "kube-system" namespace has status "Ready":"True"
	I1216 11:14:57.608293 1138702 pod_ready.go:82] duration metric: took 6.10164ms for pod "kube-controller-manager-addons-467441" in "kube-system" namespace to be "Ready" ...
	I1216 11:14:57.608324 1138702 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-pss99" in "kube-system" namespace to be "Ready" ...
	I1216 11:14:57.614680 1138702 pod_ready.go:93] pod "kube-proxy-pss99" in "kube-system" namespace has status "Ready":"True"
	I1216 11:14:57.614758 1138702 pod_ready.go:82] duration metric: took 6.413132ms for pod "kube-proxy-pss99" in "kube-system" namespace to be "Ready" ...
	I1216 11:14:57.614786 1138702 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-467441" in "kube-system" namespace to be "Ready" ...
	I1216 11:14:57.797116 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:57.799256 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:57.881646 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:57.933329 1138702 pod_ready.go:93] pod "kube-scheduler-addons-467441" in "kube-system" namespace has status "Ready":"True"
	I1216 11:14:57.933450 1138702 pod_ready.go:82] duration metric: took 318.643109ms for pod "kube-scheduler-addons-467441" in "kube-system" namespace to be "Ready" ...
	I1216 11:14:57.933498 1138702 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace to be "Ready" ...
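
The pod_ready.go lines above pair the same idea with a duration metric: poll one named pod until its PodReady condition is True and report time.Since(start) on success (e.g. "took 2.546830061s" for coredns). A hedged sketch under the same assumptions, with waitPodReady as a hypothetical helper:

	package readiness

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
	)

	// waitPodReady polls a named pod until its PodReady condition is True and
	// returns how long that took, matching the "duration metric: took ..." lines.
	func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) (time.Duration, error) {
		start := time.Now()
		for time.Since(start) < timeout {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range pod.Status.Conditions {
					if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
						return time.Since(start), nil
					}
				}
			}
			time.Sleep(500 * time.Millisecond)
		}
		return time.Since(start), fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
	}
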
	I1216 11:14:58.039890 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:58.300828 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:58.302807 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:58.382630 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:58.539956 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:58.799094 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:58.800099 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:58.883296 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:59.037746 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:59.314933 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:59.315512 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:59.381597 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:59.538657 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:14:59.794074 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:14:59.799435 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:14:59.880889 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:14:59.941116 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:00.038910 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:00.310075 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:00.311192 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:00.382171 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:00.536255 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:00.796247 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:00.796912 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:00.883439 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:01.035921 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:01.294863 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:01.297997 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:01.381750 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:01.538002 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:01.796568 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:01.797950 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:01.882260 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:01.944474 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:02.041425 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:02.293900 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:02.298875 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:02.386246 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:02.536475 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:02.797452 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:02.797831 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:02.882881 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:03.038405 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:03.295423 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:03.309405 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:03.381909 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:03.537428 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:03.793713 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:03.797385 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:03.880929 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:03.951510 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:04.042925 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:04.296038 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:04.301152 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:04.382044 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:04.537630 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:04.796970 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:04.800405 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:04.889437 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:05.036392 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:05.296972 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:05.298565 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:05.380980 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:05.538881 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:05.796187 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:05.796487 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:05.880524 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:06.036629 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:06.297309 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:06.298378 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:06.397394 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:06.440330 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:06.538520 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:06.796783 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:06.798272 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:06.880452 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:07.036689 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:07.294718 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:07.297253 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:07.385910 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:07.537417 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:07.794254 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:07.796945 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:07.881170 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:08.036892 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:08.293935 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:08.296386 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:08.381277 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:08.440684 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:08.536921 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:08.795833 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:08.796434 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:08.881151 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:09.036312 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:09.299063 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:09.301629 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:09.381045 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:09.537566 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:09.795200 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:09.797904 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:09.881188 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:10.038640 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:10.304665 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:10.305125 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:10.381907 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:10.536285 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:10.797196 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:10.798912 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:10.883387 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:10.948823 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:11.037849 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:11.298301 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:11.299797 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:11.381024 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:11.539937 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:11.797315 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:11.797940 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:11.880535 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:12.036972 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:12.295913 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:12.296967 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:12.396235 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:12.536818 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:12.795496 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:12.796664 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:12.881982 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:13.038513 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:13.298608 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:13.307439 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:13.387300 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:13.442943 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:13.538421 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:13.796007 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:13.797402 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:13.880901 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:14.040489 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:14.294073 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:14.296744 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:14.381604 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:14.536580 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:14.795409 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:14.796373 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:14.880647 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:15.037371 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:15.294093 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:15.296224 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:15.380739 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:15.447680 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:15.536586 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:15.796905 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:15.798355 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:15.881200 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:16.036889 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:16.294964 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:16.297056 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:16.395838 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:16.537198 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:16.795481 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:16.796463 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:16.880955 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:17.036254 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:17.294180 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:17.297726 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:17.381609 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:17.537437 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:17.804581 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:17.805813 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:17.881308 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:17.943557 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:18.037911 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:18.295370 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:18.296070 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:18.380745 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:18.537126 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:18.797201 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:18.797885 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:18.880463 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:19.036838 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:19.295960 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:19.296808 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:19.381987 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:19.538974 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:19.795392 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:19.797231 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:19.881950 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:20.038770 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:20.305409 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1216 11:15:20.306690 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:20.404355 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:20.440658 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:20.536936 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:20.795867 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:20.796475 1138702 kapi.go:107] duration metric: took 1m5.506563678s to wait for kubernetes.io/minikube-addons=registry ...
	I1216 11:15:20.881316 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:21.037341 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:21.295356 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:21.380626 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:21.537240 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:21.795539 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:21.881166 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:22.037784 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:22.296637 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:22.397577 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:22.441473 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:22.537018 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:22.798272 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:22.887057 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:23.036551 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:23.303882 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:23.382374 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:23.537681 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:23.798330 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:23.881199 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:24.038454 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:24.297739 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:24.382657 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:24.539902 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:24.796464 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:24.882348 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:24.943923 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:25.038363 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:25.295488 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:25.381177 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:25.536658 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:25.795497 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:25.880809 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:26.038107 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:26.295980 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:26.381314 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:26.537216 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:26.796176 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:26.881680 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:27.039085 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:27.298101 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:27.389214 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:27.440482 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:27.537818 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:27.797657 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:27.882323 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:28.037208 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:28.297212 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:28.381559 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:28.539771 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:28.797628 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:28.881532 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:29.037383 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:29.296727 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:29.396934 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:29.537289 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:29.796179 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:29.881354 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:29.939953 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:30.037726 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:30.296277 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:30.380354 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:30.537797 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:30.796093 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:30.881340 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:31.037009 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:31.296087 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:31.381261 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:31.539140 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:31.796298 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:31.880918 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:31.940776 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:32.037341 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:32.296249 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:32.396091 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:32.537166 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:32.795935 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:32.880993 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:33.037356 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:33.295590 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:33.383268 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:33.537666 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:33.796370 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:33.881696 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:34.037760 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:34.301645 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:34.382899 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:34.442676 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:34.536880 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:34.795920 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:34.881737 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:35.040083 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:35.296668 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:35.382143 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:35.537488 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:35.796514 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:35.881777 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:36.038231 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:36.298485 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:36.380998 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:36.538956 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:36.818546 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:36.881326 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:36.940527 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:37.039621 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:37.296260 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:37.382601 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:37.537550 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:37.799509 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:37.880826 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:38.036605 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:38.294651 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:38.381165 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:38.538476 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:38.796949 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:38.881812 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:38.943956 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:39.036663 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:39.295566 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:39.381377 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:39.540728 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:39.796177 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:39.899467 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:40.045219 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:40.304233 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:40.420699 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:40.541400 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:40.800182 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:40.883853 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:40.946615 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:41.037064 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:41.296509 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:41.382377 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:41.538489 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:41.796948 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:41.896520 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:42.039921 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:42.315721 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:42.389401 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:42.545032 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:42.797238 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:42.881973 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:43.039477 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:43.302110 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:43.403197 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:43.441852 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:43.536891 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:43.795382 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:43.881254 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:44.038323 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:44.295397 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:44.381226 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:44.536995 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:44.795394 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:44.881551 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:45.038206 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:45.298999 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:45.382929 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:45.538103 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:45.796096 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:45.882138 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:45.943877 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:46.038115 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:46.295573 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:46.386901 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:46.538304 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:46.796136 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:46.881584 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:47.039273 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:47.295650 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:47.381255 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:47.536498 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:47.795760 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:47.886988 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:48.037771 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:48.296701 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:48.396453 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:48.440408 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:48.536523 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:48.796032 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:48.890028 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:49.039589 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:49.295604 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:49.381123 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:49.536002 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:49.795163 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:49.884910 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:50.037679 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:50.296267 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:50.383535 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:50.539994 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:50.796127 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:50.883077 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:50.958697 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:51.042557 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:51.299537 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:51.380847 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:51.538097 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:51.796179 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:51.880507 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:52.073481 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:52.296722 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:52.383232 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:52.537566 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:52.797809 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:52.884834 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:53.038155 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:53.298163 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:53.381850 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:53.449419 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:53.537944 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:53.805530 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:53.880944 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:54.037795 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:54.296293 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:54.382362 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:54.541690 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:54.798720 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:54.881115 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:55.037146 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:55.295849 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:55.380984 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:55.536320 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:55.794966 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:55.881707 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:55.946868 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:56.037795 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:56.296054 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:56.381134 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:56.545292 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:56.795504 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:56.882084 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:57.037156 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:57.295110 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:57.384092 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:57.537386 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:57.795121 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:57.881311 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:58.036681 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:58.295691 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:58.382019 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:58.442145 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:15:58.537476 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:58.796804 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:58.895957 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:59.036195 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:59.295700 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:59.381179 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:15:59.536994 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:15:59.796478 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:15:59.881730 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:16:00.054607 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:00.303526 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:00.387806 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:16:00.449927 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:00.537429 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:00.796367 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:00.880558 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:16:01.037768 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:01.296220 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:01.383390 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:16:01.547166 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:01.796153 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:01.886236 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:16:02.037623 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:02.295740 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:02.381074 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:16:02.536250 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:02.795404 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:02.880524 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:16:02.940258 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:03.037092 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:03.296633 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:03.381583 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:16:03.537464 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:03.796971 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:03.882659 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:16:04.037535 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:04.296728 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:04.381666 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:16:04.543161 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:04.795831 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:04.880877 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:16:04.941799 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:05.044802 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:05.296456 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:05.395456 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1216 11:16:05.537058 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:05.795554 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:05.881194 1138702 kapi.go:107] duration metric: took 1m46.503939327s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1216 11:16:05.884209 1138702 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-467441 cluster.
	I1216 11:16:05.887009 1138702 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1216 11:16:05.889899 1138702 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
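The three gcp-auth messages above describe the addon's opt-out mechanism: pods carrying a label whose key is `gcp-auth-skip-secret` are left alone and get no credentials mounted. A minimal client-go sketch of such a pod object follows; only the label key comes from the output above, while the pod name and image are hypothetical:

    package main

    import (
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    )

    func main() {
        // Pods labeled with the gcp-auth-skip-secret key are skipped, so the
        // addon leaves their volumes and env untouched.
        pod := &corev1.Pod{
            ObjectMeta: metav1.ObjectMeta{
                Name:   "no-gcp-creds", // hypothetical name
                Labels: map[string]string{"gcp-auth-skip-secret": "true"},
            },
            Spec: corev1.PodSpec{
                Containers: []corev1.Container{{Name: "app", Image: "nginx"}}, // hypothetical container
            },
        }
        fmt.Println(pod.Name, pod.Labels)
    }

Existing pods are unaffected either way, which is why the last message suggests recreating them or rerunning addons enable with --refresh.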
	I1216 11:16:06.040710 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:06.295656 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:06.538627 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:06.797356 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:07.036771 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:07.295551 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:07.439542 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:07.536560 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:07.795950 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:08.036131 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:08.296513 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:08.538057 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:08.796312 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:09.036726 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:09.295208 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:09.441323 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:09.541260 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:09.796537 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:10.038183 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:10.295618 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:10.541755 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:10.796300 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:11.037535 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:11.296846 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:11.539268 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:11.798839 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:11.941747 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:12.038056 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:12.296583 1138702 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1216 11:16:12.537623 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:12.795704 1138702 kapi.go:107] duration metric: took 1m57.504832911s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1216 11:16:13.038216 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:13.545692 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:13.942230 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:14.038673 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:14.536643 1138702 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1216 11:16:15.037285 1138702 kapi.go:107] duration metric: took 1m59.505703722s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1216 11:16:15.040657 1138702 out.go:177] * Enabled addons: amd-gpu-device-plugin, storage-provisioner, ingress-dns, cloud-spanner, nvidia-device-plugin, storage-provisioner-rancher, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, gcp-auth, ingress, csi-hostpath-driver
	I1216 11:16:15.043733 1138702 addons.go:510] duration metric: took 2m7.111533733s for enable addons: enabled=[amd-gpu-device-plugin storage-provisioner ingress-dns cloud-spanner nvidia-device-plugin storage-provisioner-rancher inspektor-gadget metrics-server yakd volumesnapshots registry gcp-auth ingress csi-hostpath-driver]
	I1216 11:16:16.440345 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:18.440594 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:20.940416 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:23.439645 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:25.440268 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:27.940054 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:29.940352 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:31.943464 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:34.439754 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:36.940804 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:39.439610 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:41.940885 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:44.440459 1138702 pod_ready.go:103] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"False"
	I1216 11:16:45.939607 1138702 pod_ready.go:93] pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace has status "Ready":"True"
	I1216 11:16:45.939637 1138702 pod_ready.go:82] duration metric: took 1m48.006110092s for pod "metrics-server-84c5f94fbc-vwzrq" in "kube-system" namespace to be "Ready" ...
	I1216 11:16:45.939653 1138702 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-zh27s" in "kube-system" namespace to be "Ready" ...
	I1216 11:16:45.945201 1138702 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-zh27s" in "kube-system" namespace has status "Ready":"True"
	I1216 11:16:45.945226 1138702 pod_ready.go:82] duration metric: took 5.565595ms for pod "nvidia-device-plugin-daemonset-zh27s" in "kube-system" namespace to be "Ready" ...
	I1216 11:16:45.945268 1138702 pod_ready.go:39] duration metric: took 1m50.947157506s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
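The pod_ready lines above come from a simple poll: re-fetch the pod every couple of seconds and test its Ready condition until it flips to True or the time budget (6m0s per pod here) runs out. Below is a sketch of the same loop with client-go and the wait helpers; the kubeconfig path is an assumption, and minikube's actual helper in pod_ready.go uses its own retry machinery rather than this code:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(config)
        if err != nil {
            panic(err)
        }

        // Poll every 2s, give up after 6m, matching the cadence and budget in the log.
        err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 6*time.Minute, true,
            func(ctx context.Context) (bool, error) {
                pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, "metrics-server-84c5f94fbc-vwzrq", metav1.GetOptions{})
                if err != nil {
                    return false, nil // treat fetch errors as transient and keep polling
                }
                for _, c := range pod.Status.Conditions {
                    if c.Type == corev1.PodReady {
                        return c.Status == corev1.ConditionTrue, nil
                    }
                }
                return false, nil
            })
        if err != nil {
            panic(err)
        }
        fmt.Println("pod is Ready")
    }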
	I1216 11:16:45.945291 1138702 api_server.go:52] waiting for apiserver process to appear ...
	I1216 11:16:45.945322 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:16:45.945412 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:16:46.001172 1138702 cri.go:89] found id: "ce98b18aa1ed3299204710fa4aecc87aad2125b7765a5fb4112a594d7957de48"
	I1216 11:16:46.001211 1138702 cri.go:89] found id: ""
	I1216 11:16:46.001220 1138702 logs.go:282] 1 containers: [ce98b18aa1ed3299204710fa4aecc87aad2125b7765a5fb4112a594d7957de48]
	I1216 11:16:46.001279 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:16:46.008141 1138702 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:16:46.008223 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:16:46.052702 1138702 cri.go:89] found id: "be2e989c41089971268e13abf590f1b0bda504564800a352bdf18d9378c9e3f2"
	I1216 11:16:46.052725 1138702 cri.go:89] found id: ""
	I1216 11:16:46.052738 1138702 logs.go:282] 1 containers: [be2e989c41089971268e13abf590f1b0bda504564800a352bdf18d9378c9e3f2]
	I1216 11:16:46.052852 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:16:46.056246 1138702 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:16:46.056325 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:16:46.097539 1138702 cri.go:89] found id: "dfc3285cc9ad50e22a2305800830e8b6b81d27450828ed41da8d90a5a81c2c8e"
	I1216 11:16:46.097565 1138702 cri.go:89] found id: ""
	I1216 11:16:46.097573 1138702 logs.go:282] 1 containers: [dfc3285cc9ad50e22a2305800830e8b6b81d27450828ed41da8d90a5a81c2c8e]
	I1216 11:16:46.097632 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:16:46.101329 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:16:46.101403 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:16:46.141737 1138702 cri.go:89] found id: "1087144bdb483abb8ae166e63a9d0bb8e7f790dfe4e00eab621499957bee00c6"
	I1216 11:16:46.141761 1138702 cri.go:89] found id: ""
	I1216 11:16:46.141770 1138702 logs.go:282] 1 containers: [1087144bdb483abb8ae166e63a9d0bb8e7f790dfe4e00eab621499957bee00c6]
	I1216 11:16:46.141847 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:16:46.145455 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:16:46.145559 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:16:46.183461 1138702 cri.go:89] found id: "16934743848f4975744a77abf304b3a77b61c059e8028249a4797ca7d0fe3184"
	I1216 11:16:46.183481 1138702 cri.go:89] found id: ""
	I1216 11:16:46.183489 1138702 logs.go:282] 1 containers: [16934743848f4975744a77abf304b3a77b61c059e8028249a4797ca7d0fe3184]
	I1216 11:16:46.183544 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:16:46.187103 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:16:46.187180 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:16:46.227346 1138702 cri.go:89] found id: "2cf6fde1ee7c9a9dd2473916be3c1f67ffb0c9877a501651e16c405e42fdc42b"
	I1216 11:16:46.227419 1138702 cri.go:89] found id: ""
	I1216 11:16:46.227441 1138702 logs.go:282] 1 containers: [2cf6fde1ee7c9a9dd2473916be3c1f67ffb0c9877a501651e16c405e42fdc42b]
	I1216 11:16:46.227533 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:16:46.231115 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:16:46.231191 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:16:46.271789 1138702 cri.go:89] found id: "7e63609c526376be42c32a8b4675e7e85b8b7926c5674de81fc7064aa893b180"
	I1216 11:16:46.271814 1138702 cri.go:89] found id: ""
	I1216 11:16:46.271823 1138702 logs.go:282] 1 containers: [7e63609c526376be42c32a8b4675e7e85b8b7926c5674de81fc7064aa893b180]
	I1216 11:16:46.271884 1138702 ssh_runner.go:195] Run: which crictl
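Each cri.go/ssh_runner pair above resolves one container ID per control-plane component with `crictl ps -a --quiet --name=<component>` before its logs are fetched. A local sketch of that enumeration step; minikube runs the same command on the node through its SSH runner, whereas this version assumes crictl is available directly on the host:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    // containerIDs runs `sudo crictl ps -a --quiet --name=<name>` and returns
    // the container IDs it prints, one per line (the command shown in the log).
    func containerIDs(name string) ([]string, error) {
        out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
        if err != nil {
            return nil, err
        }
        return strings.Fields(string(out)), nil
    }

    func main() {
        for _, component := range []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler", "kube-proxy", "kube-controller-manager", "kindnet"} {
            ids, err := containerIDs(component)
            if err != nil {
                fmt.Println(component, "error:", err)
                continue
            }
            fmt.Printf("%s: %v\n", component, ids)
        }
    }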
	I1216 11:16:46.275339 1138702 logs.go:123] Gathering logs for kube-apiserver [ce98b18aa1ed3299204710fa4aecc87aad2125b7765a5fb4112a594d7957de48] ...
	I1216 11:16:46.275363 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce98b18aa1ed3299204710fa4aecc87aad2125b7765a5fb4112a594d7957de48"
	I1216 11:16:46.345234 1138702 logs.go:123] Gathering logs for kube-scheduler [1087144bdb483abb8ae166e63a9d0bb8e7f790dfe4e00eab621499957bee00c6] ...
	I1216 11:16:46.345265 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1087144bdb483abb8ae166e63a9d0bb8e7f790dfe4e00eab621499957bee00c6"
	I1216 11:16:46.391849 1138702 logs.go:123] Gathering logs for kube-proxy [16934743848f4975744a77abf304b3a77b61c059e8028249a4797ca7d0fe3184] ...
	I1216 11:16:46.391879 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16934743848f4975744a77abf304b3a77b61c059e8028249a4797ca7d0fe3184"
	I1216 11:16:46.429861 1138702 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:16:46.429892 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:16:46.520822 1138702 logs.go:123] Gathering logs for kubelet ...
	I1216 11:16:46.520863 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1216 11:16:46.602564 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.956195    1518 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-467441' and this object
	W1216 11:16:46.603029 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.956248    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:16:46.603289 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.956302    1518 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-467441" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-467441' and this object
	W1216 11:16:46.603541 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.956316    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:16:46.603740 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.960716    1518 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-467441' and this object
	W1216 11:16:46.603923 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.960834    1518 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-467441' and this object
	W1216 11:16:46.604168 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.960864    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:16:46.604396 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.960895    1518 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
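The `Found kubelet problem` warnings above are produced by scanning the last 400 journal lines of the kubelet unit for suspicious entries. A rough sketch of such a scan; the regexp here is an assumption for illustration, not the actual filter in logs.go:

    package main

    import (
        "bufio"
        "fmt"
        "os/exec"
        "regexp"
    )

    // problemRe is a stand-in: it flags reflector failures and unhandled errors
    // like the RBAC "forbidden" lines in the log above.
    var problemRe = regexp.MustCompile(`reflector\.go|Unhandled Error|forbidden`)

    func main() {
        cmd := exec.Command("/bin/bash", "-c", "sudo journalctl -u kubelet -n 400")
        out, err := cmd.StdoutPipe()
        if err != nil {
            panic(err)
        }
        if err := cmd.Start(); err != nil {
            panic(err)
        }
        sc := bufio.NewScanner(out)
        for sc.Scan() {
            if problemRe.MatchString(sc.Text()) {
                fmt.Println("Found kubelet problem:", sc.Text())
            }
        }
        _ = cmd.Wait()
    }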
	I1216 11:16:46.650517 1138702 logs.go:123] Gathering logs for dmesg ...
	I1216 11:16:46.650558 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:16:46.672279 1138702 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:16:46.672315 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 11:16:46.870402 1138702 logs.go:123] Gathering logs for etcd [be2e989c41089971268e13abf590f1b0bda504564800a352bdf18d9378c9e3f2] ...
	I1216 11:16:46.870434 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2e989c41089971268e13abf590f1b0bda504564800a352bdf18d9378c9e3f2"
	I1216 11:16:46.934396 1138702 logs.go:123] Gathering logs for coredns [dfc3285cc9ad50e22a2305800830e8b6b81d27450828ed41da8d90a5a81c2c8e] ...
	I1216 11:16:46.934475 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfc3285cc9ad50e22a2305800830e8b6b81d27450828ed41da8d90a5a81c2c8e"
	I1216 11:16:46.982588 1138702 logs.go:123] Gathering logs for kube-controller-manager [2cf6fde1ee7c9a9dd2473916be3c1f67ffb0c9877a501651e16c405e42fdc42b] ...
	I1216 11:16:46.982620 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cf6fde1ee7c9a9dd2473916be3c1f67ffb0c9877a501651e16c405e42fdc42b"
	I1216 11:16:47.074457 1138702 logs.go:123] Gathering logs for kindnet [7e63609c526376be42c32a8b4675e7e85b8b7926c5674de81fc7064aa893b180] ...
	I1216 11:16:47.074491 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e63609c526376be42c32a8b4675e7e85b8b7926c5674de81fc7064aa893b180"
	I1216 11:16:47.119447 1138702 logs.go:123] Gathering logs for container status ...
	I1216 11:16:47.119477 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:16:47.180622 1138702 out.go:358] Setting ErrFile to fd 2...
	I1216 11:16:47.180650 1138702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1216 11:16:47.180709 1138702 out.go:270] X Problems detected in kubelet:
	W1216 11:16:47.180724 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.956316    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:16:47.180737 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.960716    1518 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-467441' and this object
	W1216 11:16:47.180817 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.960834    1518 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-467441' and this object
	W1216 11:16:47.180828 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.960864    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:16:47.180834 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.960895    1518 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	I1216 11:16:47.181006 1138702 out.go:358] Setting ErrFile to fd 2...
	I1216 11:16:47.181016 1138702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:16:57.181906 1138702 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:16:57.195943 1138702 api_server.go:72] duration metric: took 2m49.263992573s to wait for apiserver process to appear ...
	I1216 11:16:57.195970 1138702 api_server.go:88] waiting for apiserver healthz status ...
	I1216 11:16:57.196006 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:16:57.196067 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:16:57.236415 1138702 cri.go:89] found id: "ce98b18aa1ed3299204710fa4aecc87aad2125b7765a5fb4112a594d7957de48"
	I1216 11:16:57.236445 1138702 cri.go:89] found id: ""
	I1216 11:16:57.236453 1138702 logs.go:282] 1 containers: [ce98b18aa1ed3299204710fa4aecc87aad2125b7765a5fb4112a594d7957de48]
	I1216 11:16:57.236509 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:16:57.240157 1138702 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:16:57.240233 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:16:57.281957 1138702 cri.go:89] found id: "be2e989c41089971268e13abf590f1b0bda504564800a352bdf18d9378c9e3f2"
	I1216 11:16:57.281980 1138702 cri.go:89] found id: ""
	I1216 11:16:57.281988 1138702 logs.go:282] 1 containers: [be2e989c41089971268e13abf590f1b0bda504564800a352bdf18d9378c9e3f2]
	I1216 11:16:57.282045 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:16:57.285452 1138702 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:16:57.285526 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:16:57.324812 1138702 cri.go:89] found id: "dfc3285cc9ad50e22a2305800830e8b6b81d27450828ed41da8d90a5a81c2c8e"
	I1216 11:16:57.324833 1138702 cri.go:89] found id: ""
	I1216 11:16:57.324842 1138702 logs.go:282] 1 containers: [dfc3285cc9ad50e22a2305800830e8b6b81d27450828ed41da8d90a5a81c2c8e]
	I1216 11:16:57.324903 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:16:57.328554 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:16:57.328649 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:16:57.367780 1138702 cri.go:89] found id: "1087144bdb483abb8ae166e63a9d0bb8e7f790dfe4e00eab621499957bee00c6"
	I1216 11:16:57.367804 1138702 cri.go:89] found id: ""
	I1216 11:16:57.367812 1138702 logs.go:282] 1 containers: [1087144bdb483abb8ae166e63a9d0bb8e7f790dfe4e00eab621499957bee00c6]
	I1216 11:16:57.367873 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:16:57.372459 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:16:57.372532 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:16:57.416074 1138702 cri.go:89] found id: "16934743848f4975744a77abf304b3a77b61c059e8028249a4797ca7d0fe3184"
	I1216 11:16:57.416098 1138702 cri.go:89] found id: ""
	I1216 11:16:57.416106 1138702 logs.go:282] 1 containers: [16934743848f4975744a77abf304b3a77b61c059e8028249a4797ca7d0fe3184]
	I1216 11:16:57.416163 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:16:57.420351 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:16:57.420428 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:16:57.460422 1138702 cri.go:89] found id: "2cf6fde1ee7c9a9dd2473916be3c1f67ffb0c9877a501651e16c405e42fdc42b"
	I1216 11:16:57.460442 1138702 cri.go:89] found id: ""
	I1216 11:16:57.460450 1138702 logs.go:282] 1 containers: [2cf6fde1ee7c9a9dd2473916be3c1f67ffb0c9877a501651e16c405e42fdc42b]
	I1216 11:16:57.460506 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:16:57.464240 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:16:57.464317 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:16:57.509878 1138702 cri.go:89] found id: "7e63609c526376be42c32a8b4675e7e85b8b7926c5674de81fc7064aa893b180"
	I1216 11:16:57.509903 1138702 cri.go:89] found id: ""
	I1216 11:16:57.509927 1138702 logs.go:282] 1 containers: [7e63609c526376be42c32a8b4675e7e85b8b7926c5674de81fc7064aa893b180]
	I1216 11:16:57.509990 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:16:57.514082 1138702 logs.go:123] Gathering logs for etcd [be2e989c41089971268e13abf590f1b0bda504564800a352bdf18d9378c9e3f2] ...
	I1216 11:16:57.514148 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2e989c41089971268e13abf590f1b0bda504564800a352bdf18d9378c9e3f2"
	I1216 11:16:57.569358 1138702 logs.go:123] Gathering logs for kube-scheduler [1087144bdb483abb8ae166e63a9d0bb8e7f790dfe4e00eab621499957bee00c6] ...
	I1216 11:16:57.569392 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1087144bdb483abb8ae166e63a9d0bb8e7f790dfe4e00eab621499957bee00c6"
	I1216 11:16:57.617759 1138702 logs.go:123] Gathering logs for kubelet ...
	I1216 11:16:57.617798 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1216 11:16:57.700328 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.956195    1518 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-467441' and this object
	W1216 11:16:57.700603 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.956248    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:16:57.700803 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.956302    1518 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-467441" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-467441' and this object
	W1216 11:16:57.701034 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.956316    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:16:57.701213 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.960716    1518 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-467441' and this object
	W1216 11:16:57.701382 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.960834    1518 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-467441' and this object
	W1216 11:16:57.701596 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.960864    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:16:57.701820 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.960895    1518 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	I1216 11:16:57.743823 1138702 logs.go:123] Gathering logs for dmesg ...
	I1216 11:16:57.743856 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:16:57.760967 1138702 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:16:57.760998 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 11:16:57.905525 1138702 logs.go:123] Gathering logs for kube-controller-manager [2cf6fde1ee7c9a9dd2473916be3c1f67ffb0c9877a501651e16c405e42fdc42b] ...
	I1216 11:16:57.905557 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cf6fde1ee7c9a9dd2473916be3c1f67ffb0c9877a501651e16c405e42fdc42b"
	I1216 11:16:57.994161 1138702 logs.go:123] Gathering logs for kindnet [7e63609c526376be42c32a8b4675e7e85b8b7926c5674de81fc7064aa893b180] ...
	I1216 11:16:57.994201 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e63609c526376be42c32a8b4675e7e85b8b7926c5674de81fc7064aa893b180"
	I1216 11:16:58.040425 1138702 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:16:58.040455 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:16:58.137749 1138702 logs.go:123] Gathering logs for container status ...
	I1216 11:16:58.137788 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:16:58.201503 1138702 logs.go:123] Gathering logs for kube-apiserver [ce98b18aa1ed3299204710fa4aecc87aad2125b7765a5fb4112a594d7957de48] ...
	I1216 11:16:58.201535 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce98b18aa1ed3299204710fa4aecc87aad2125b7765a5fb4112a594d7957de48"
	I1216 11:16:58.259240 1138702 logs.go:123] Gathering logs for coredns [dfc3285cc9ad50e22a2305800830e8b6b81d27450828ed41da8d90a5a81c2c8e] ...
	I1216 11:16:58.259280 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfc3285cc9ad50e22a2305800830e8b6b81d27450828ed41da8d90a5a81c2c8e"
	I1216 11:16:58.305035 1138702 logs.go:123] Gathering logs for kube-proxy [16934743848f4975744a77abf304b3a77b61c059e8028249a4797ca7d0fe3184] ...
	I1216 11:16:58.305064 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16934743848f4975744a77abf304b3a77b61c059e8028249a4797ca7d0fe3184"
	I1216 11:16:58.342723 1138702 out.go:358] Setting ErrFile to fd 2...
	I1216 11:16:58.342749 1138702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1216 11:16:58.342800 1138702 out.go:270] X Problems detected in kubelet:
	W1216 11:16:58.342816 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.956316    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:16:58.342823 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.960716    1518 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-467441' and this object
	W1216 11:16:58.342833 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.960834    1518 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-467441' and this object
	W1216 11:16:58.342843 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.960864    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:16:58.342851 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.960895    1518 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	I1216 11:16:58.342860 1138702 out.go:358] Setting ErrFile to fd 2...
	I1216 11:16:58.342867 1138702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:17:08.343706 1138702 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1216 11:17:08.353682 1138702 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1216 11:17:08.354760 1138702 api_server.go:141] control plane version: v1.31.2
	I1216 11:17:08.354788 1138702 api_server.go:131] duration metric: took 11.158809876s to wait for apiserver health ...
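The healthz wait above boils down to GETting https://192.168.49.2:8443/healthz until it answers 200 with body `ok`. A sketch of that probe; skipping TLS verification is a brevity shortcut for this sketch only, where a real client would trust the cluster CA as minikube's does:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Sketch-only: trust nothing-checked TLS. Production code should load
            // the cluster CA instead of setting InsecureSkipVerify.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        for i := 0; i < 150; i++ { // ~5 minutes at 2s per attempt
            resp, err := client.Get("https://192.168.49.2:8443/healthz")
            if err == nil {
                body, _ := io.ReadAll(resp.Body)
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK && string(body) == "ok" {
                    fmt.Println("apiserver healthy")
                    return
                }
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("gave up waiting for apiserver")
    }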
	I1216 11:17:08.354797 1138702 system_pods.go:43] waiting for kube-system pods to appear ...
	I1216 11:17:08.354818 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1216 11:17:08.354886 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1216 11:17:08.401145 1138702 cri.go:89] found id: "ce98b18aa1ed3299204710fa4aecc87aad2125b7765a5fb4112a594d7957de48"
	I1216 11:17:08.401166 1138702 cri.go:89] found id: ""
	I1216 11:17:08.401175 1138702 logs.go:282] 1 containers: [ce98b18aa1ed3299204710fa4aecc87aad2125b7765a5fb4112a594d7957de48]
	I1216 11:17:08.401232 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:17:08.405646 1138702 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1216 11:17:08.405771 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1216 11:17:08.446571 1138702 cri.go:89] found id: "be2e989c41089971268e13abf590f1b0bda504564800a352bdf18d9378c9e3f2"
	I1216 11:17:08.446606 1138702 cri.go:89] found id: ""
	I1216 11:17:08.446615 1138702 logs.go:282] 1 containers: [be2e989c41089971268e13abf590f1b0bda504564800a352bdf18d9378c9e3f2]
	I1216 11:17:08.446689 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:17:08.450258 1138702 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1216 11:17:08.450339 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1216 11:17:08.487771 1138702 cri.go:89] found id: "dfc3285cc9ad50e22a2305800830e8b6b81d27450828ed41da8d90a5a81c2c8e"
	I1216 11:17:08.487796 1138702 cri.go:89] found id: ""
	I1216 11:17:08.487805 1138702 logs.go:282] 1 containers: [dfc3285cc9ad50e22a2305800830e8b6b81d27450828ed41da8d90a5a81c2c8e]
	I1216 11:17:08.487863 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:17:08.493160 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1216 11:17:08.493244 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1216 11:17:08.535149 1138702 cri.go:89] found id: "1087144bdb483abb8ae166e63a9d0bb8e7f790dfe4e00eab621499957bee00c6"
	I1216 11:17:08.535184 1138702 cri.go:89] found id: ""
	I1216 11:17:08.535193 1138702 logs.go:282] 1 containers: [1087144bdb483abb8ae166e63a9d0bb8e7f790dfe4e00eab621499957bee00c6]
	I1216 11:17:08.535275 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:17:08.539036 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1216 11:17:08.539113 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1216 11:17:08.579518 1138702 cri.go:89] found id: "16934743848f4975744a77abf304b3a77b61c059e8028249a4797ca7d0fe3184"
	I1216 11:17:08.579541 1138702 cri.go:89] found id: ""
	I1216 11:17:08.579552 1138702 logs.go:282] 1 containers: [16934743848f4975744a77abf304b3a77b61c059e8028249a4797ca7d0fe3184]
	I1216 11:17:08.579609 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:17:08.583275 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1216 11:17:08.583352 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1216 11:17:08.634675 1138702 cri.go:89] found id: "2cf6fde1ee7c9a9dd2473916be3c1f67ffb0c9877a501651e16c405e42fdc42b"
	I1216 11:17:08.634697 1138702 cri.go:89] found id: ""
	I1216 11:17:08.634706 1138702 logs.go:282] 1 containers: [2cf6fde1ee7c9a9dd2473916be3c1f67ffb0c9877a501651e16c405e42fdc42b]
	I1216 11:17:08.634781 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:17:08.638226 1138702 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1216 11:17:08.638296 1138702 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1216 11:17:08.688309 1138702 cri.go:89] found id: "7e63609c526376be42c32a8b4675e7e85b8b7926c5674de81fc7064aa893b180"
	I1216 11:17:08.688332 1138702 cri.go:89] found id: ""
	I1216 11:17:08.688341 1138702 logs.go:282] 1 containers: [7e63609c526376be42c32a8b4675e7e85b8b7926c5674de81fc7064aa893b180]
	I1216 11:17:08.688403 1138702 ssh_runner.go:195] Run: which crictl
	I1216 11:17:08.692015 1138702 logs.go:123] Gathering logs for kube-controller-manager [2cf6fde1ee7c9a9dd2473916be3c1f67ffb0c9877a501651e16c405e42fdc42b] ...
	I1216 11:17:08.692042 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 2cf6fde1ee7c9a9dd2473916be3c1f67ffb0c9877a501651e16c405e42fdc42b"
	I1216 11:17:08.791592 1138702 logs.go:123] Gathering logs for CRI-O ...
	I1216 11:17:08.791628 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1216 11:17:08.914178 1138702 logs.go:123] Gathering logs for kubelet ...
	I1216 11:17:08.914254 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1216 11:17:08.999128 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.956195    1518 reflector.go:561] object-"ingress-nginx"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-467441' and this object
	W1216 11:17:08.999406 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.956248    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:17:08.999628 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.956302    1518 reflector.go:561] object-"ingress-nginx"/"ingress-nginx-admission": failed to list *v1.Secret: secrets "ingress-nginx-admission" is forbidden: User "system:node:addons-467441" cannot list resource "secrets" in API group "" in the namespace "ingress-nginx": no relationship found between node 'addons-467441' and this object
	W1216 11:17:08.999859 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.956316    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:17:09.000041 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.960716    1518 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-467441' and this object
	W1216 11:17:09.000218 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.960834    1518 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-467441' and this object
	W1216 11:17:09.000428 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.960864    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:17:09.000651 1138702 logs.go:138] Found kubelet problem: Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.960895    1518 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	I1216 11:17:09.043650 1138702 logs.go:123] Gathering logs for dmesg ...
	I1216 11:17:09.043686 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1216 11:17:09.060745 1138702 logs.go:123] Gathering logs for describe nodes ...
	I1216 11:17:09.060787 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1216 11:17:09.202048 1138702 logs.go:123] Gathering logs for kube-apiserver [ce98b18aa1ed3299204710fa4aecc87aad2125b7765a5fb4112a594d7957de48] ...
	I1216 11:17:09.202080 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ce98b18aa1ed3299204710fa4aecc87aad2125b7765a5fb4112a594d7957de48"
	I1216 11:17:09.265441 1138702 logs.go:123] Gathering logs for etcd [be2e989c41089971268e13abf590f1b0bda504564800a352bdf18d9378c9e3f2] ...
	I1216 11:17:09.265478 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 be2e989c41089971268e13abf590f1b0bda504564800a352bdf18d9378c9e3f2"
	I1216 11:17:09.321017 1138702 logs.go:123] Gathering logs for coredns [dfc3285cc9ad50e22a2305800830e8b6b81d27450828ed41da8d90a5a81c2c8e] ...
	I1216 11:17:09.321050 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 dfc3285cc9ad50e22a2305800830e8b6b81d27450828ed41da8d90a5a81c2c8e"
	I1216 11:17:09.371014 1138702 logs.go:123] Gathering logs for kube-scheduler [1087144bdb483abb8ae166e63a9d0bb8e7f790dfe4e00eab621499957bee00c6] ...
	I1216 11:17:09.371051 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1087144bdb483abb8ae166e63a9d0bb8e7f790dfe4e00eab621499957bee00c6"
	I1216 11:17:09.418757 1138702 logs.go:123] Gathering logs for kube-proxy [16934743848f4975744a77abf304b3a77b61c059e8028249a4797ca7d0fe3184] ...
	I1216 11:17:09.418790 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 16934743848f4975744a77abf304b3a77b61c059e8028249a4797ca7d0fe3184"
	I1216 11:17:09.458217 1138702 logs.go:123] Gathering logs for kindnet [7e63609c526376be42c32a8b4675e7e85b8b7926c5674de81fc7064aa893b180] ...
	I1216 11:17:09.458245 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 7e63609c526376be42c32a8b4675e7e85b8b7926c5674de81fc7064aa893b180"
	I1216 11:17:09.503090 1138702 logs.go:123] Gathering logs for container status ...
	I1216 11:17:09.503127 1138702 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1216 11:17:09.554217 1138702 out.go:358] Setting ErrFile to fd 2...
	I1216 11:17:09.554245 1138702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1216 11:17:09.554311 1138702 out.go:270] X Problems detected in kubelet:
	W1216 11:17:09.554328 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.956316    1518 reflector.go:158] "Unhandled Error" err="object-\"ingress-nginx\"/\"ingress-nginx-admission\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"ingress-nginx-admission\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"secrets\" in API group \"\" in the namespace \"ingress-nginx\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:17:09.554342 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.960716    1518 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-467441' and this object
	W1216 11:17:09.554352 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: W1216 11:14:54.960834    1518 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:addons-467441" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-467441' and this object
	W1216 11:17:09.554359 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.960864    1518 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	W1216 11:17:09.554367 1138702 out.go:270]   Dec 16 11:14:54 addons-467441 kubelet[1518]: E1216 11:14:54.960895    1518 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-467441\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-467441' and this object" logger="UnhandledError"
	I1216 11:17:09.554381 1138702 out.go:358] Setting ErrFile to fd 2...
	I1216 11:17:09.554388 1138702 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:17:19.566657 1138702 system_pods.go:59] 18 kube-system pods found
	I1216 11:17:19.566695 1138702 system_pods.go:61] "coredns-7c65d6cfc9-q957p" [82d9993b-39f8-4677-b836-c7cd37117b1a] Running
	I1216 11:17:19.566703 1138702 system_pods.go:61] "csi-hostpath-attacher-0" [94788c63-6666-44b4-83c3-7fe1f6ebcaf9] Running
	I1216 11:17:19.566707 1138702 system_pods.go:61] "csi-hostpath-resizer-0" [d21a0e84-86fe-47a9-8ccd-682aa6f7f144] Running
	I1216 11:17:19.567009 1138702 system_pods.go:61] "csi-hostpathplugin-mpt97" [339d76af-f10c-43ce-b0ea-7ae551d3d1a5] Running
	I1216 11:17:19.567021 1138702 system_pods.go:61] "etcd-addons-467441" [f49ade91-d760-43f8-ae39-79b39c0e47a4] Running
	I1216 11:17:19.567026 1138702 system_pods.go:61] "kindnet-xpdrb" [58cae89f-628b-407b-8d36-12e7fdd1244d] Running
	I1216 11:17:19.567032 1138702 system_pods.go:61] "kube-apiserver-addons-467441" [6ce3401b-45d9-4799-9597-95b79f51b386] Running
	I1216 11:17:19.567037 1138702 system_pods.go:61] "kube-controller-manager-addons-467441" [5e691c63-7bec-42d1-bc95-28f433f30b4a] Running
	I1216 11:17:19.567041 1138702 system_pods.go:61] "kube-ingress-dns-minikube" [4350d794-5394-4140-8327-30f5a49dfb05] Running
	I1216 11:17:19.567045 1138702 system_pods.go:61] "kube-proxy-pss99" [ea376084-34b2-4d86-955d-27196e1014e6] Running
	I1216 11:17:19.567048 1138702 system_pods.go:61] "kube-scheduler-addons-467441" [48844348-8139-488d-85ae-7138147160bb] Running
	I1216 11:17:19.567052 1138702 system_pods.go:61] "metrics-server-84c5f94fbc-vwzrq" [702d35be-9a96-4ad2-b0dd-6e3c9ff3d4aa] Running
	I1216 11:17:19.567056 1138702 system_pods.go:61] "nvidia-device-plugin-daemonset-zh27s" [29ad869e-9aed-4717-ab7c-b8ba4cf3c784] Running
	I1216 11:17:19.567060 1138702 system_pods.go:61] "registry-5cc95cd69-f5zh4" [e511e988-2365-410f-8684-de95a39675bf] Running
	I1216 11:17:19.567083 1138702 system_pods.go:61] "registry-proxy-x5969" [ebb5d950-3c97-4dff-b737-8817d4630dcc] Running
	I1216 11:17:19.567090 1138702 system_pods.go:61] "snapshot-controller-56fcc65765-45zdj" [728c876b-c4ad-4289-b1f4-8310ca8a70c6] Running
	I1216 11:17:19.567094 1138702 system_pods.go:61] "snapshot-controller-56fcc65765-smdxv" [feffc5a8-04d1-4848-ba68-da3e522bc18f] Running
	I1216 11:17:19.567097 1138702 system_pods.go:61] "storage-provisioner" [943dc892-0be5-4c97-8093-d793a3d09c44] Running
	I1216 11:17:19.567103 1138702 system_pods.go:74] duration metric: took 11.212300961s to wait for pod list to return data ...
	I1216 11:17:19.567112 1138702 default_sa.go:34] waiting for default service account to be created ...
	I1216 11:17:19.574009 1138702 default_sa.go:45] found service account: "default"
	I1216 11:17:19.574037 1138702 default_sa.go:55] duration metric: took 6.919054ms for default service account to be created ...
	I1216 11:17:19.574048 1138702 system_pods.go:116] waiting for k8s-apps to be running ...
	I1216 11:17:19.584764 1138702 system_pods.go:86] 18 kube-system pods found
	I1216 11:17:19.584799 1138702 system_pods.go:89] "coredns-7c65d6cfc9-q957p" [82d9993b-39f8-4677-b836-c7cd37117b1a] Running
	I1216 11:17:19.584808 1138702 system_pods.go:89] "csi-hostpath-attacher-0" [94788c63-6666-44b4-83c3-7fe1f6ebcaf9] Running
	I1216 11:17:19.584813 1138702 system_pods.go:89] "csi-hostpath-resizer-0" [d21a0e84-86fe-47a9-8ccd-682aa6f7f144] Running
	I1216 11:17:19.584836 1138702 system_pods.go:89] "csi-hostpathplugin-mpt97" [339d76af-f10c-43ce-b0ea-7ae551d3d1a5] Running
	I1216 11:17:19.584845 1138702 system_pods.go:89] "etcd-addons-467441" [f49ade91-d760-43f8-ae39-79b39c0e47a4] Running
	I1216 11:17:19.584850 1138702 system_pods.go:89] "kindnet-xpdrb" [58cae89f-628b-407b-8d36-12e7fdd1244d] Running
	I1216 11:17:19.584858 1138702 system_pods.go:89] "kube-apiserver-addons-467441" [6ce3401b-45d9-4799-9597-95b79f51b386] Running
	I1216 11:17:19.584863 1138702 system_pods.go:89] "kube-controller-manager-addons-467441" [5e691c63-7bec-42d1-bc95-28f433f30b4a] Running
	I1216 11:17:19.584869 1138702 system_pods.go:89] "kube-ingress-dns-minikube" [4350d794-5394-4140-8327-30f5a49dfb05] Running
	I1216 11:17:19.584873 1138702 system_pods.go:89] "kube-proxy-pss99" [ea376084-34b2-4d86-955d-27196e1014e6] Running
	I1216 11:17:19.584879 1138702 system_pods.go:89] "kube-scheduler-addons-467441" [48844348-8139-488d-85ae-7138147160bb] Running
	I1216 11:17:19.584887 1138702 system_pods.go:89] "metrics-server-84c5f94fbc-vwzrq" [702d35be-9a96-4ad2-b0dd-6e3c9ff3d4aa] Running
	I1216 11:17:19.584891 1138702 system_pods.go:89] "nvidia-device-plugin-daemonset-zh27s" [29ad869e-9aed-4717-ab7c-b8ba4cf3c784] Running
	I1216 11:17:19.584895 1138702 system_pods.go:89] "registry-5cc95cd69-f5zh4" [e511e988-2365-410f-8684-de95a39675bf] Running
	I1216 11:17:19.584911 1138702 system_pods.go:89] "registry-proxy-x5969" [ebb5d950-3c97-4dff-b737-8817d4630dcc] Running
	I1216 11:17:19.584920 1138702 system_pods.go:89] "snapshot-controller-56fcc65765-45zdj" [728c876b-c4ad-4289-b1f4-8310ca8a70c6] Running
	I1216 11:17:19.584926 1138702 system_pods.go:89] "snapshot-controller-56fcc65765-smdxv" [feffc5a8-04d1-4848-ba68-da3e522bc18f] Running
	I1216 11:17:19.584930 1138702 system_pods.go:89] "storage-provisioner" [943dc892-0be5-4c97-8093-d793a3d09c44] Running
	I1216 11:17:19.584948 1138702 system_pods.go:126] duration metric: took 10.894324ms to wait for k8s-apps to be running ...
	I1216 11:17:19.584963 1138702 system_svc.go:44] waiting for kubelet service to be running ....
	I1216 11:17:19.585033 1138702 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 11:17:19.596970 1138702 system_svc.go:56] duration metric: took 11.998812ms WaitForService to wait for kubelet
	I1216 11:17:19.597004 1138702 kubeadm.go:582] duration metric: took 3m11.665061999s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1216 11:17:19.597025 1138702 node_conditions.go:102] verifying NodePressure condition ...
	I1216 11:17:19.600582 1138702 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1216 11:17:19.600618 1138702 node_conditions.go:123] node cpu capacity is 2
	I1216 11:17:19.600632 1138702 node_conditions.go:105] duration metric: took 3.599207ms to run NodePressure ...
	I1216 11:17:19.600644 1138702 start.go:241] waiting for startup goroutines ...
	I1216 11:17:19.600663 1138702 start.go:246] waiting for cluster config update ...
	I1216 11:17:19.600685 1138702 start.go:255] writing updated cluster config ...
	I1216 11:17:19.601023 1138702 ssh_runner.go:195] Run: rm -f paused
	I1216 11:17:19.993279 1138702 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1216 11:17:19.996682 1138702 out.go:177] * Done! kubectl is now configured to use "addons-467441" cluster and "default" namespace by default
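	
	(The sections that follow are the cluster log bundle for this profile; assuming the binary from this run, essentially the same output can be regenerated with:
	
	    out/minikube-linux-arm64 -p addons-467441 logs
	
	Each "==> ... <==" header below marks one log source: the container runtime, container status, per-component logs, and the kubelet journal.)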
	
	
	==> CRI-O <==
	Dec 16 11:22:53 addons-467441 crio[984]: time="2024-12-16 11:22:53.511263324Z" level=info msg="Stopping container: 73c742742b84cad2448447250e51d24feee3e38f5f3c3cb9a2cb78c9132b363f (timeout: 30s)" id=85571a8f-9698-4dbd-b74a-a0359036012d name=/runtime.v1.RuntimeService/StopContainer
	Dec 16 11:22:53 addons-467441 conmon[3257]: conmon 73c742742b84cad24484 <ninfo>: container 3269 exited with status 2
	Dec 16 11:22:53 addons-467441 crio[984]: time="2024-12-16 11:22:53.677881235Z" level=info msg="Stopped container 73c742742b84cad2448447250e51d24feee3e38f5f3c3cb9a2cb78c9132b363f: default/cloud-spanner-emulator-dc5db94f4-6fvnl/cloud-spanner-emulator" id=85571a8f-9698-4dbd-b74a-a0359036012d name=/runtime.v1.RuntimeService/StopContainer
	Dec 16 11:22:53 addons-467441 crio[984]: time="2024-12-16 11:22:53.678560249Z" level=info msg="Stopping pod sandbox: 9b21a75ee64810813c460f69fb6b7bc127a9fed9d47ee75d735984bb5e548ffa" id=a98ce5ed-1966-4c94-bea4-fe48948a40e5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 16 11:22:53 addons-467441 crio[984]: time="2024-12-16 11:22:53.678791423Z" level=info msg="Got pod network &{Name:cloud-spanner-emulator-dc5db94f4-6fvnl Namespace:default ID:9b21a75ee64810813c460f69fb6b7bc127a9fed9d47ee75d735984bb5e548ffa UID:4dbe6e1f-a347-404d-96a9-ef7fe48b7632 NetNS:/var/run/netns/6ca9b2c6-f201-4a9a-b278-bb09d0bfb06f Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 16 11:22:53 addons-467441 crio[984]: time="2024-12-16 11:22:53.678931210Z" level=info msg="Deleting pod default_cloud-spanner-emulator-dc5db94f4-6fvnl from CNI network \"kindnet\" (type=ptp)"
	Dec 16 11:22:53 addons-467441 crio[984]: time="2024-12-16 11:22:53.699129364Z" level=info msg="Stopped pod sandbox: 9b21a75ee64810813c460f69fb6b7bc127a9fed9d47ee75d735984bb5e548ffa" id=a98ce5ed-1966-4c94-bea4-fe48948a40e5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 16 11:22:53 addons-467441 crio[984]: time="2024-12-16 11:22:53.715301293Z" level=info msg="Removing container: 73c742742b84cad2448447250e51d24feee3e38f5f3c3cb9a2cb78c9132b363f" id=20cb80e7-7f5c-4fa7-ace9-8cc8eae0ac08 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 11:22:53 addons-467441 crio[984]: time="2024-12-16 11:22:53.735282576Z" level=info msg="Removed container 73c742742b84cad2448447250e51d24feee3e38f5f3c3cb9a2cb78c9132b363f: default/cloud-spanner-emulator-dc5db94f4-6fvnl/cloud-spanner-emulator" id=20cb80e7-7f5c-4fa7-ace9-8cc8eae0ac08 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 16 11:23:03 addons-467441 crio[984]: time="2024-12-16 11:23:03.780165878Z" level=info msg="Stopping pod sandbox: 89bdbd92615d8d6512a51e59dae2e78bcaa8dd657a093f360f38c5c6d32fef4e" id=514858b4-c536-4302-abeb-36a6053f4b6b name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 16 11:23:03 addons-467441 crio[984]: time="2024-12-16 11:23:03.780218258Z" level=info msg="Stopped pod sandbox (already stopped): 89bdbd92615d8d6512a51e59dae2e78bcaa8dd657a093f360f38c5c6d32fef4e" id=514858b4-c536-4302-abeb-36a6053f4b6b name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 16 11:23:03 addons-467441 crio[984]: time="2024-12-16 11:23:03.780534943Z" level=info msg="Removing pod sandbox: 89bdbd92615d8d6512a51e59dae2e78bcaa8dd657a093f360f38c5c6d32fef4e" id=b8de76a5-8642-4f42-82dc-59cc20522934 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 16 11:23:03 addons-467441 crio[984]: time="2024-12-16 11:23:03.792109371Z" level=info msg="Removed pod sandbox: 89bdbd92615d8d6512a51e59dae2e78bcaa8dd657a093f360f38c5c6d32fef4e" id=b8de76a5-8642-4f42-82dc-59cc20522934 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 16 11:23:03 addons-467441 crio[984]: time="2024-12-16 11:23:03.792745621Z" level=info msg="Stopping pod sandbox: 9b21a75ee64810813c460f69fb6b7bc127a9fed9d47ee75d735984bb5e548ffa" id=0e731d4c-3d58-4a03-85ea-610c56df26ce name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 16 11:23:03 addons-467441 crio[984]: time="2024-12-16 11:23:03.792808774Z" level=info msg="Stopped pod sandbox (already stopped): 9b21a75ee64810813c460f69fb6b7bc127a9fed9d47ee75d735984bb5e548ffa" id=0e731d4c-3d58-4a03-85ea-610c56df26ce name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 16 11:23:03 addons-467441 crio[984]: time="2024-12-16 11:23:03.793099909Z" level=info msg="Removing pod sandbox: 9b21a75ee64810813c460f69fb6b7bc127a9fed9d47ee75d735984bb5e548ffa" id=dbe50f45-2b9f-4f06-a02a-bdd71c6f469d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 16 11:23:03 addons-467441 crio[984]: time="2024-12-16 11:23:03.804022842Z" level=info msg="Removed pod sandbox: 9b21a75ee64810813c460f69fb6b7bc127a9fed9d47ee75d735984bb5e548ffa" id=dbe50f45-2b9f-4f06-a02a-bdd71c6f469d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 16 11:23:03 addons-467441 crio[984]: time="2024-12-16 11:23:03.804571718Z" level=info msg="Stopping pod sandbox: a2e85f89937f8fb828aa1c7080ef2f59a7f0e67422881242c9f576522a2088b0" id=e327476a-c62e-45e4-a6cf-95b879cc633f name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 16 11:23:03 addons-467441 crio[984]: time="2024-12-16 11:23:03.804607869Z" level=info msg="Stopped pod sandbox (already stopped): a2e85f89937f8fb828aa1c7080ef2f59a7f0e67422881242c9f576522a2088b0" id=e327476a-c62e-45e4-a6cf-95b879cc633f name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 16 11:23:03 addons-467441 crio[984]: time="2024-12-16 11:23:03.805047062Z" level=info msg="Removing pod sandbox: a2e85f89937f8fb828aa1c7080ef2f59a7f0e67422881242c9f576522a2088b0" id=898acbcd-0140-45db-8771-0d5766e045f7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 16 11:23:03 addons-467441 crio[984]: time="2024-12-16 11:23:03.815037867Z" level=info msg="Removed pod sandbox: a2e85f89937f8fb828aa1c7080ef2f59a7f0e67422881242c9f576522a2088b0" id=898acbcd-0140-45db-8771-0d5766e045f7 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 16 11:23:03 addons-467441 crio[984]: time="2024-12-16 11:23:03.815590887Z" level=info msg="Stopping pod sandbox: 0e6b556703679475b2e16ee03009e906ce68499ab95c5baa178f0a3e0e54a577" id=df37b427-6990-4085-bda0-ed09bfbee652 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 16 11:23:03 addons-467441 crio[984]: time="2024-12-16 11:23:03.815631829Z" level=info msg="Stopped pod sandbox (already stopped): 0e6b556703679475b2e16ee03009e906ce68499ab95c5baa178f0a3e0e54a577" id=df37b427-6990-4085-bda0-ed09bfbee652 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 16 11:23:03 addons-467441 crio[984]: time="2024-12-16 11:23:03.816011971Z" level=info msg="Removing pod sandbox: 0e6b556703679475b2e16ee03009e906ce68499ab95c5baa178f0a3e0e54a577" id=5771edf4-83de-4992-9120-159081543af6 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 16 11:23:03 addons-467441 crio[984]: time="2024-12-16 11:23:03.827124832Z" level=info msg="Removed pod sandbox: 0e6b556703679475b2e16ee03009e906ce68499ab95c5baa178f0a3e0e54a577" id=5771edf4-83de-4992-9120-159081543af6 name=/runtime.v1.RuntimeService/RemovePodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	0ecf3bf56527f       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   2 minutes ago       Running             hello-world-app           0                   08167ca4d8f53       hello-world-app-55bf9c44b4-5984x
	2a6318d7d7c28       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                         4 minutes ago       Running             nginx                     0                   559f175560e1e       nginx
	61516daa3bce2       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     6 minutes ago       Running             busybox                   0                   94696fa4b7bb6       busybox
	0e532871b67d2       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f   8 minutes ago       Running             metrics-server            0                   eb86c5ec58d0a       metrics-server-84c5f94fbc-vwzrq
	dfc3285cc9ad5       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                        8 minutes ago       Running             coredns                   0                   bb6987a88a4a1       coredns-7c65d6cfc9-q957p
	c7daab6e1e325       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        8 minutes ago       Running             storage-provisioner       0                   354f75e8a8fe8       storage-provisioner
	7e63609c52637       docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e                      9 minutes ago       Running             kindnet-cni               0                   0728522c6e7f8       kindnet-xpdrb
	16934743848f4       021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba                                                        9 minutes ago       Running             kube-proxy                0                   40ec3fe26d4a5       kube-proxy-pss99
	2cf6fde1ee7c9       9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba                                                        9 minutes ago       Running             kube-controller-manager   0                   01fb0a0cf4715       kube-controller-manager-addons-467441
	1087144bdb483       d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a                                                        9 minutes ago       Running             kube-scheduler            0                   c94024647fea1       kube-scheduler-addons-467441
	be2e989c41089       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        9 minutes ago       Running             etcd                      0                   578635ee28890       etcd-addons-467441
	ce98b18aa1ed3       f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270                                                        9 minutes ago       Running             kube-apiserver            0                   eb93b0602613e       kube-apiserver-addons-467441
	
	
	==> coredns [dfc3285cc9ad50e22a2305800830e8b6b81d27450828ed41da8d90a5a81c2c8e] <==
	[INFO] 10.244.0.21:52974 - 28013 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000084027s
	[INFO] 10.244.0.21:52974 - 63623 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000092149s
	[INFO] 10.244.0.21:52974 - 56456 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000083156s
	[INFO] 10.244.0.21:52974 - 17990 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000109191s
	[INFO] 10.244.0.21:52974 - 30579 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001275708s
	[INFO] 10.244.0.21:52974 - 47153 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001035567s
	[INFO] 10.244.0.21:52974 - 54311 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000073565s
	[INFO] 10.244.0.21:50763 - 61740 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000117904s
	[INFO] 10.244.0.21:53646 - 1423 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000115492s
	[INFO] 10.244.0.21:50763 - 8591 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000066157s
	[INFO] 10.244.0.21:50763 - 38849 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000110462s
	[INFO] 10.244.0.21:53646 - 49881 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000181788s
	[INFO] 10.244.0.21:50763 - 52760 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000089048s
	[INFO] 10.244.0.21:53646 - 11722 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000048056s
	[INFO] 10.244.0.21:53646 - 59283 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000113564s
	[INFO] 10.244.0.21:50763 - 60438 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000161769s
	[INFO] 10.244.0.21:50763 - 1227 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000060158s
	[INFO] 10.244.0.21:53646 - 45091 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000052955s
	[INFO] 10.244.0.21:53646 - 34813 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000057672s
	[INFO] 10.244.0.21:50763 - 24189 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001735209s
	[INFO] 10.244.0.21:53646 - 45202 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001289115s
	[INFO] 10.244.0.21:50763 - 45285 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001128562s
	[INFO] 10.244.0.21:50763 - 46630 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000062497s
	[INFO] 10.244.0.21:53646 - 38643 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001186217s
	[INFO] 10.244.0.21:53646 - 1547 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000068462s
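	
	(The NXDOMAIN-then-NOERROR pattern above is ordinary Kubernetes DNS search-path expansion, not a resolution failure: with the default ndots:5, the pod resolver appends each search domain before trying the absolute name, so only the final query for hello-world-app.default.svc.cluster.local. answers NOERROR. A representative pod /etc/resolv.conf, consistent with the suffixes seen above but not captured from this run, would look like:
	
	    search default.svc.cluster.local svc.cluster.local cluster.local us-east-2.compute.internal
	    nameserver 10.96.0.10
	    options ndots:5
	)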
	
	
	==> describe nodes <==
	Name:               addons-467441
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-467441
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22da80be3b90f71512d84256b3df4ef76bd13ff8
	                    minikube.k8s.io/name=addons-467441
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_16T11_14_04_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-467441
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Dec 2024 11:14:00 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-467441
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Dec 2024 11:23:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Dec 2024 11:22:11 +0000   Mon, 16 Dec 2024 11:13:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Dec 2024 11:22:11 +0000   Mon, 16 Dec 2024 11:13:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Dec 2024 11:22:11 +0000   Mon, 16 Dec 2024 11:13:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Dec 2024 11:22:11 +0000   Mon, 16 Dec 2024 11:14:54 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-467441
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 d62e87b8c4ce46d688223c02af9759c3
	  System UUID:                201f706f-2d99-4556-8c0e-c0725ad84842
	  Boot ID:                    4589c027-c057-41f4-bde7-e198f2c36aaf
	  Kernel Version:             5.15.0-1072-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m24s
	  default                     hello-world-app-55bf9c44b4-5984x         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m40s
	  kube-system                 coredns-7c65d6cfc9-q957p                 100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     9m34s
	  kube-system                 etcd-addons-467441                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         9m41s
	  kube-system                 kindnet-xpdrb                            100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      9m36s
	  kube-system                 kube-apiserver-addons-467441             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m41s
	  kube-system                 kube-controller-manager-addons-467441    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m43s
	  kube-system                 kube-proxy-pss99                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m35s
	  kube-system                 kube-scheduler-addons-467441             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m41s
	  kube-system                 metrics-server-84c5f94fbc-vwzrq          100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         9m31s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 9m29s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  9m49s (x8 over 9m49s)  kubelet          Node addons-467441 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m49s (x8 over 9m49s)  kubelet          Node addons-467441 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m49s (x7 over 9m49s)  kubelet          Node addons-467441 status is now: NodeHasSufficientPID
	  Normal   Starting                 9m41s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 9m41s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  9m41s                  kubelet          Node addons-467441 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m41s                  kubelet          Node addons-467441 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     9m41s                  kubelet          Node addons-467441 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           9m37s                  node-controller  Node addons-467441 event: Registered Node addons-467441 in Controller
	  Normal   NodeReady                8m50s                  kubelet          Node addons-467441 status is now: NodeReady
	
	
	==> dmesg <==
	
	
	==> etcd [be2e989c41089971268e13abf590f1b0bda504564800a352bdf18d9378c9e3f2] <==
	{"level":"warn","ts":"2024-12-16T11:14:09.541714Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T11:14:09.034557Z","time spent":"507.114669ms","remote":"127.0.0.1:35536","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3623,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kindnet-xpdrb\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kindnet-xpdrb\" value_size:3575 >> failure:<>"}
	{"level":"info","ts":"2024-12-16T11:14:09.540681Z","caller":"traceutil/trace.go:171","msg":"trace[1863671281] transaction","detail":"{read_only:false; response_revision:346; number_of_response:1; }","duration":"394.972535ms","start":"2024-12-16T11:14:09.145688Z","end":"2024-12-16T11:14:09.540660Z","steps":["trace[1863671281] 'process raft request'  (duration: 310.931457ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T11:14:09.544923Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T11:14:09.145666Z","time spent":"399.189923ms","remote":"127.0.0.1:35536","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":3360,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/pods/kube-system/kube-proxy-pss99\" mod_revision:0 > success:<request_put:<key:\"/registry/pods/kube-system/kube-proxy-pss99\" value_size:3309 >> failure:<>"}
	{"level":"info","ts":"2024-12-16T11:14:09.540832Z","caller":"traceutil/trace.go:171","msg":"trace[1642192314] transaction","detail":"{read_only:false; response_revision:347; number_of_response:1; }","duration":"394.764204ms","start":"2024-12-16T11:14:09.146061Z","end":"2024-12-16T11:14:09.540825Z","steps":["trace[1642192314] 'process raft request'  (duration: 310.617676ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T11:14:09.604243Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T11:14:09.146051Z","time spent":"458.012458ms","remote":"127.0.0.1:35550","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":168,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/serviceaccounts/default/default\" mod_revision:328 > success:<request_put:<key:\"/registry/serviceaccounts/default/default\" value_size:120 >> failure:<request_range:<key:\"/registry/serviceaccounts/default/default\" > >"}
	{"level":"info","ts":"2024-12-16T11:14:09.540851Z","caller":"traceutil/trace.go:171","msg":"trace[1427641176] transaction","detail":"{read_only:false; response_revision:348; number_of_response:1; }","duration":"245.175739ms","start":"2024-12-16T11:14:09.295670Z","end":"2024-12-16T11:14:09.540846Z","steps":["trace[1427641176] 'process raft request'  (duration: 161.045572ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T11:14:09.557962Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"262.076247ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"warn","ts":"2024-12-16T11:14:09.614842Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T11:14:09.295648Z","time spent":"319.145381ms","remote":"127.0.0.1:35446","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":680,"response count":0,"response size":40,"request content":"compare:<target:MOD key:\"/registry/events/kube-system/coredns.1811a3ff698090be\" mod_revision:0 > success:<request_put:<key:\"/registry/events/kube-system/coredns.1811a3ff698090be\" value_size:609 lease:8128033945515139534 >> failure:<>"}
	{"level":"info","ts":"2024-12-16T11:14:09.618278Z","caller":"traceutil/trace.go:171","msg":"trace[1388851801] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:348; }","duration":"322.400996ms","start":"2024-12-16T11:14:09.295862Z","end":"2024-12-16T11:14:09.618263Z","steps":["trace[1388851801] 'agreement among raft nodes before linearized reading'  (duration: 163.632568ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T11:14:09.700948Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-16T11:14:09.295841Z","time spent":"405.076134ms","remote":"127.0.0.1:35334","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-12-16T11:14:11.514867Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"163.247397ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/minikube-ingress-dns\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T11:14:11.515186Z","caller":"traceutil/trace.go:171","msg":"trace[1048516907] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/minikube-ingress-dns; range_end:; response_count:0; response_revision:378; }","duration":"163.577111ms","start":"2024-12-16T11:14:11.351594Z","end":"2024-12-16T11:14:11.515171Z","steps":["trace[1048516907] 'agreement among raft nodes before linearized reading'  (duration: 81.470156ms)","trace[1048516907] 'range keys from in-memory index tree'  (duration: 81.669133ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-16T11:14:11.529175Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"190.245424ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/local-path-storage\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T11:14:11.529310Z","caller":"traceutil/trace.go:171","msg":"trace[574520632] range","detail":"{range_begin:/registry/namespaces/local-path-storage; range_end:; response_count:0; response_revision:378; }","duration":"203.91795ms","start":"2024-12-16T11:14:11.325374Z","end":"2024-12-16T11:14:11.529292Z","steps":["trace[574520632] 'agreement among raft nodes before linearized reading'  (duration: 107.796445ms)","trace[574520632] 'range keys from in-memory index tree'  (duration: 82.439952ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-16T11:14:11.529723Z","caller":"traceutil/trace.go:171","msg":"trace[468470555] transaction","detail":"{read_only:false; response_revision:379; number_of_response:1; }","duration":"112.694267ms","start":"2024-12-16T11:14:11.417020Z","end":"2024-12-16T11:14:11.529714Z","steps":["trace[468470555] 'process raft request'  (duration: 91.400032ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T11:14:12.638245Z","caller":"traceutil/trace.go:171","msg":"trace[1024051239] transaction","detail":"{read_only:false; response_revision:427; number_of_response:1; }","duration":"116.165302ms","start":"2024-12-16T11:14:12.522066Z","end":"2024-12-16T11:14:12.638231Z","steps":["trace[1024051239] 'process raft request'  (duration: 36.961552ms)","trace[1024051239] 'compare'  (duration: 78.766617ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-16T11:14:12.638424Z","caller":"traceutil/trace.go:171","msg":"trace[1493401482] transaction","detail":"{read_only:false; response_revision:428; number_of_response:1; }","duration":"116.177282ms","start":"2024-12-16T11:14:12.522240Z","end":"2024-12-16T11:14:12.638418Z","steps":["trace[1493401482] 'process raft request'  (duration: 115.638908ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T11:14:12.638509Z","caller":"traceutil/trace.go:171","msg":"trace[576796796] transaction","detail":"{read_only:false; response_revision:429; number_of_response:1; }","duration":"116.224887ms","start":"2024-12-16T11:14:12.522279Z","end":"2024-12-16T11:14:12.638504Z","steps":["trace[576796796] 'process raft request'  (duration: 115.631647ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-16T11:14:12.638590Z","caller":"traceutil/trace.go:171","msg":"trace[1982304804] linearizableReadLoop","detail":"{readStateIndex:438; appliedIndex:437; }","duration":"116.392866ms","start":"2024-12-16T11:14:12.522191Z","end":"2024-12-16T11:14:12.638584Z","steps":["trace[1982304804] 'read index received'  (duration: 24.89266ms)","trace[1982304804] 'applied index is now lower than readState.Index'  (duration: 91.499525ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-16T11:14:12.638826Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"116.619093ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T11:14:12.638859Z","caller":"traceutil/trace.go:171","msg":"trace[1038037122] range","detail":"{range_begin:/registry/serviceaccounts/local-path-storage/local-path-provisioner-service-account; range_end:; response_count:0; response_revision:430; }","duration":"116.664425ms","start":"2024-12-16T11:14:12.522187Z","end":"2024-12-16T11:14:12.638852Z","steps":["trace[1038037122] 'agreement among raft nodes before linearized reading'  (duration: 116.603241ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T11:14:12.704649Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"138.540233ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/resourcequotas\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-16T11:14:12.704807Z","caller":"traceutil/trace.go:171","msg":"trace[1706744554] range","detail":"{range_begin:/registry/resourcequotas; range_end:; response_count:0; response_revision:434; }","duration":"138.710887ms","start":"2024-12-16T11:14:12.566084Z","end":"2024-12-16T11:14:12.704795Z","steps":["trace[1706744554] 'agreement among raft nodes before linearized reading'  (duration: 138.416035ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-16T11:14:12.705747Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"148.011265ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/replicasets/kube-system/coredns-7c65d6cfc9\" ","response":"range_response_count:1 size:3797"}
	{"level":"info","ts":"2024-12-16T11:14:12.705793Z","caller":"traceutil/trace.go:171","msg":"trace[1139679079] range","detail":"{range_begin:/registry/replicasets/kube-system/coredns-7c65d6cfc9; range_end:; response_count:1; response_revision:434; }","duration":"148.062653ms","start":"2024-12-16T11:14:12.557721Z","end":"2024-12-16T11:14:12.705784Z","steps":["trace[1139679079] 'agreement among raft nodes before linearized reading'  (duration: 147.982647ms)"],"step_count":1}
	
	
	==> kernel <==
	 11:23:44 up  8:06,  0 users,  load average: 0.54, 0.78, 1.65
	Linux addons-467441 5.15.0-1072-aws #78~20.04.1-Ubuntu SMP Wed Oct 9 15:29:54 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [7e63609c526376be42c32a8b4675e7e85b8b7926c5674de81fc7064aa893b180] <==
	I1216 11:21:44.730754       1 main.go:301] handling current node
	I1216 11:21:54.730830       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:21:54.730866       1 main.go:301] handling current node
	I1216 11:22:04.730741       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:22:04.730784       1 main.go:301] handling current node
	I1216 11:22:14.730051       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:22:14.730110       1 main.go:301] handling current node
	I1216 11:22:24.730464       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:22:24.730496       1 main.go:301] handling current node
	I1216 11:22:34.731031       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:22:34.731092       1 main.go:301] handling current node
	I1216 11:22:44.730069       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:22:44.730109       1 main.go:301] handling current node
	I1216 11:22:54.731041       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:22:54.731076       1 main.go:301] handling current node
	I1216 11:23:04.730957       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:23:04.730999       1 main.go:301] handling current node
	I1216 11:23:14.730681       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:23:14.730714       1 main.go:301] handling current node
	I1216 11:23:24.730828       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:23:24.730862       1 main.go:301] handling current node
	I1216 11:23:34.730790       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:23:34.730827       1 main.go:301] handling current node
	I1216 11:23:44.732837       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:23:44.732868       1 main.go:301] handling current node
	
	
	==> kube-apiserver [ce98b18aa1ed3299204710fa4aecc87aad2125b7765a5fb4112a594d7957de48] <==
	I1216 11:16:45.684426       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1216 11:17:31.034061       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:50534: use of closed network connection
	E1216 11:17:31.432998       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:50570: use of closed network connection
	I1216 11:17:40.769727       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.30.107"}
	I1216 11:18:25.317489       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1216 11:18:45.022176       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 11:18:45.022992       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 11:18:45.076980       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 11:18:45.077200       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 11:18:45.106361       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 11:18:45.106821       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 11:18:45.170734       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 11:18:45.171333       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1216 11:18:45.243047       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1216 11:18:45.243096       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1216 11:18:46.171574       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1216 11:18:46.244337       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1216 11:18:46.290025       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1216 11:18:58.770239       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1216 11:18:59.910130       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1216 11:19:04.342254       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1216 11:19:04.667148       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.110.106.24"}
	I1216 11:21:24.960568       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.97.65.168"}
	E1216 11:21:29.533843       1 watch.go:250] "Unhandled Error" err="http2: stream closed" logger="UnhandledError"
	E1216 11:22:01.381136       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	
	
	==> kube-controller-manager [2cf6fde1ee7c9a9dd2473916be3c1f67ffb0c9877a501651e16c405e42fdc42b] <==
	W1216 11:21:56.046897       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 11:21:56.046960       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 11:22:06.085561       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 11:22:06.085604       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1216 11:22:11.989334       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-467441"
	W1216 11:22:26.055223       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 11:22:26.055264       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 11:22:32.010278       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 11:22:32.010324       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1216 11:22:33.708529       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="local-path-storage"
	I1216 11:22:35.284106       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="yakd-dashboard/yakd-dashboard-67d98fc6b" duration="4.956µs"
	I1216 11:22:45.392481       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="yakd-dashboard"
	W1216 11:22:48.995185       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 11:22:48.995227       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 11:22:53.323913       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 11:22:53.324032       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1216 11:22:53.487277       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cloud-spanner-emulator-dc5db94f4" duration="8.148µs"
	W1216 11:22:56.859965       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 11:22:56.860093       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 11:23:08.835510       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 11:23:08.835632       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 11:23:26.460172       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 11:23:26.460312       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1216 11:23:35.910988       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1216 11:23:35.911030       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
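	
	(The repeating "failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" errors are the controller manager's metadata informers still trying to watch API groups whose CRDs were removed mid-run; the kube-apiserver log above shows the snapshot.storage.k8s.io and gadget.kinvolk.io watchers being terminated when those addons were torn down. A quick check that a group is really gone, as a hypothetical follow-up not run here:
	
	    kubectl --context addons-467441 api-resources --api-group=snapshot.storage.k8s.io
	)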
	
	
	==> kube-proxy [16934743848f4975744a77abf304b3a77b61c059e8028249a4797ca7d0fe3184] <==
	I1216 11:14:13.640099       1 server_linux.go:66] "Using iptables proxy"
	I1216 11:14:14.782967       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1216 11:14:14.783123       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 11:14:15.042773       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 11:14:15.042850       1 server_linux.go:169] "Using iptables Proxier"
	I1216 11:14:15.051817       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 11:14:15.052303       1 server.go:483] "Version info" version="v1.31.2"
	I1216 11:14:15.057948       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 11:14:15.072587       1 config.go:199] "Starting service config controller"
	I1216 11:14:15.072621       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1216 11:14:15.072652       1 config.go:105] "Starting endpoint slice config controller"
	I1216 11:14:15.072657       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1216 11:14:15.073080       1 config.go:328] "Starting node config controller"
	I1216 11:14:15.073101       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1216 11:14:15.173241       1 shared_informer.go:320] Caches are synced for node config
	I1216 11:14:15.173368       1 shared_informer.go:320] Caches are synced for service config
	I1216 11:14:15.173395       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [1087144bdb483abb8ae166e63a9d0bb8e7f790dfe4e00eab621499957bee00c6] <==
	W1216 11:14:01.634454       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1216 11:14:01.635670       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 11:14:01.634561       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1216 11:14:01.635696       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 11:14:01.634617       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1216 11:14:01.635725       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1216 11:14:01.634659       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1216 11:14:01.635755       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 11:14:01.634761       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1216 11:14:01.635788       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1216 11:14:01.634806       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1216 11:14:01.635817       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 11:14:01.634857       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1216 11:14:01.635838       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 11:14:01.634899       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1216 11:14:01.635868       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 11:14:01.634999       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1216 11:14:01.635897       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 11:14:01.635135       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1216 11:14:01.635918       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1216 11:14:01.635386       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1216 11:14:01.635948       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 11:14:01.635408       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1216 11:14:01.635967       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I1216 11:14:02.921142       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
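
[Editor's note] The repeated "forbidden" list/watch failures above are a startup-ordering race: the scheduler's informers begin syncing before the RBAC bindings for system:kube-scheduler have propagated, and the closing "Caches are synced" line confirms they cleared on their own. Had they persisted, one hedged way to check the scheduler's effective permissions is kubectl impersonation (standard kubectl flags; assumes the kubeconfig user is allowed to impersonate system accounts):

    kubectl --context addons-467441 auth can-i list persistentvolumes --as=system:kube-scheduler
    kubectl --context addons-467441 auth can-i watch storageclasses.storage.k8s.io --as=system:kube-scheduler

Both should print "yes" once the cluster role bindings have propagated.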
	
	
	==> kubelet <==
	Dec 16 11:22:47 addons-467441 kubelet[1518]: I1216 11:22:47.726338    1518 scope.go:117] "RemoveContainer" containerID="c8a4ce073d9325501f70ab34276010923621b100bbaec8840b069eb4d8fe1e2f"
	Dec 16 11:22:47 addons-467441 kubelet[1518]: E1216 11:22:47.726713    1518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8a4ce073d9325501f70ab34276010923621b100bbaec8840b069eb4d8fe1e2f\": container with ID starting with c8a4ce073d9325501f70ab34276010923621b100bbaec8840b069eb4d8fe1e2f not found: ID does not exist" containerID="c8a4ce073d9325501f70ab34276010923621b100bbaec8840b069eb4d8fe1e2f"
	Dec 16 11:22:47 addons-467441 kubelet[1518]: I1216 11:22:47.726755    1518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8a4ce073d9325501f70ab34276010923621b100bbaec8840b069eb4d8fe1e2f"} err="failed to get container status \"c8a4ce073d9325501f70ab34276010923621b100bbaec8840b069eb4d8fe1e2f\": rpc error: code = NotFound desc = could not find container \"c8a4ce073d9325501f70ab34276010923621b100bbaec8840b069eb4d8fe1e2f\": container with ID starting with c8a4ce073d9325501f70ab34276010923621b100bbaec8840b069eb4d8fe1e2f not found: ID does not exist"
	Dec 16 11:22:49 addons-467441 kubelet[1518]: I1216 11:22:49.220898    1518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29ad869e-9aed-4717-ab7c-b8ba4cf3c784" path="/var/lib/kubelet/pods/29ad869e-9aed-4717-ab7c-b8ba4cf3c784/volumes"
	Dec 16 11:22:53 addons-467441 kubelet[1518]: E1216 11:22:53.451219    1518 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348173450936225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:22:53 addons-467441 kubelet[1518]: E1216 11:22:53.451253    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348173450936225,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:22:53 addons-467441 kubelet[1518]: I1216 11:22:53.714149    1518 scope.go:117] "RemoveContainer" containerID="73c742742b84cad2448447250e51d24feee3e38f5f3c3cb9a2cb78c9132b363f"
	Dec 16 11:22:53 addons-467441 kubelet[1518]: I1216 11:22:53.735549    1518 scope.go:117] "RemoveContainer" containerID="73c742742b84cad2448447250e51d24feee3e38f5f3c3cb9a2cb78c9132b363f"
	Dec 16 11:22:53 addons-467441 kubelet[1518]: E1216 11:22:53.735952    1518 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73c742742b84cad2448447250e51d24feee3e38f5f3c3cb9a2cb78c9132b363f\": container with ID starting with 73c742742b84cad2448447250e51d24feee3e38f5f3c3cb9a2cb78c9132b363f not found: ID does not exist" containerID="73c742742b84cad2448447250e51d24feee3e38f5f3c3cb9a2cb78c9132b363f"
	Dec 16 11:22:53 addons-467441 kubelet[1518]: I1216 11:22:53.735998    1518 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73c742742b84cad2448447250e51d24feee3e38f5f3c3cb9a2cb78c9132b363f"} err="failed to get container status \"73c742742b84cad2448447250e51d24feee3e38f5f3c3cb9a2cb78c9132b363f\": rpc error: code = NotFound desc = could not find container \"73c742742b84cad2448447250e51d24feee3e38f5f3c3cb9a2cb78c9132b363f\": container with ID starting with 73c742742b84cad2448447250e51d24feee3e38f5f3c3cb9a2cb78c9132b363f not found: ID does not exist"
	Dec 16 11:22:53 addons-467441 kubelet[1518]: I1216 11:22:53.863707    1518 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-74bhn\" (UniqueName: \"kubernetes.io/projected/4dbe6e1f-a347-404d-96a9-ef7fe48b7632-kube-api-access-74bhn\") pod \"4dbe6e1f-a347-404d-96a9-ef7fe48b7632\" (UID: \"4dbe6e1f-a347-404d-96a9-ef7fe48b7632\") "
	Dec 16 11:22:53 addons-467441 kubelet[1518]: I1216 11:22:53.867991    1518 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4dbe6e1f-a347-404d-96a9-ef7fe48b7632-kube-api-access-74bhn" (OuterVolumeSpecName: "kube-api-access-74bhn") pod "4dbe6e1f-a347-404d-96a9-ef7fe48b7632" (UID: "4dbe6e1f-a347-404d-96a9-ef7fe48b7632"). InnerVolumeSpecName "kube-api-access-74bhn". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Dec 16 11:22:53 addons-467441 kubelet[1518]: I1216 11:22:53.964296    1518 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-74bhn\" (UniqueName: \"kubernetes.io/projected/4dbe6e1f-a347-404d-96a9-ef7fe48b7632-kube-api-access-74bhn\") on node \"addons-467441\" DevicePath \"\""
	Dec 16 11:22:55 addons-467441 kubelet[1518]: I1216 11:22:55.220584    1518 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4dbe6e1f-a347-404d-96a9-ef7fe48b7632" path="/var/lib/kubelet/pods/4dbe6e1f-a347-404d-96a9-ef7fe48b7632/volumes"
	Dec 16 11:23:03 addons-467441 kubelet[1518]: E1216 11:23:03.453921    1518 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348183453663021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:23:03 addons-467441 kubelet[1518]: E1216 11:23:03.453971    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348183453663021,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:23:13 addons-467441 kubelet[1518]: E1216 11:23:13.456909    1518 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348193456636551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:23:13 addons-467441 kubelet[1518]: E1216 11:23:13.456953    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348193456636551,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:23:23 addons-467441 kubelet[1518]: E1216 11:23:23.459232    1518 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348203459010689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:23:23 addons-467441 kubelet[1518]: E1216 11:23:23.459270    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348203459010689,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:23:33 addons-467441 kubelet[1518]: E1216 11:23:33.461950    1518 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348213461724172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:23:33 addons-467441 kubelet[1518]: E1216 11:23:33.461988    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348213461724172,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:23:39 addons-467441 kubelet[1518]: I1216 11:23:39.219126    1518 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 16 11:23:43 addons-467441 kubelet[1518]: E1216 11:23:43.465112    1518 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348223464888514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:23:43 addons-467441 kubelet[1518]: E1216 11:23:43.465149    1518 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348223464888514,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
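
[Editor's note] The recurring eviction-manager errors above are noisy but benign: CRI-O's ImageFsInfo response carries an empty ContainerFilesystems list, so the kubelet cannot decide HasDedicatedImageFs and skips the eviction sync; no pods are evicted because of it. To see exactly what the runtime reports, crictl (bundled in the minikube node image) can be queried directly; a sketch:

    out/minikube-linux-arm64 -p addons-467441 ssh "sudo crictl imagefsinfo"

The output is the same FilesystemUsage record (mountpoint /var/lib/containers/storage/overlay-images) embedded in the log lines above.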
	
	
	==> storage-provisioner [c7daab6e1e325650b602568d5a975591b6abd08f1dd2994cac6ed61bbdfd0ad6] <==
	I1216 11:14:55.901330       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 11:14:55.913941       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 11:14:55.914067       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1216 11:14:55.926532       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 11:14:55.926711       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-467441_7f220d40-98bf-4075-86ad-2616bdfe9153!
	I1216 11:14:55.927796       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"13c55a31-277f-4961-8ce7-f8d1fb0f723f", APIVersion:"v1", ResourceVersion:"926", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-467441_7f220d40-98bf-4075-86ad-2616bdfe9153 became leader
	I1216 11:14:56.027217       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-467441_7f220d40-98bf-4075-86ad-2616bdfe9153!
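
[Editor's note] The provisioner log shows the usual client-go leader-election sequence: acquire the kube-system/k8s.io-minikube-hostpath lock, emit a LeaderElection event, then start the controller. This provisioner records the holder in an Endpoints annotation (the event above references Kind "Endpoints"), so the current leader can be inspected with plain kubectl; a sketch:

    kubectl --context addons-467441 -n kube-system get endpoints k8s.io-minikube-hostpath -o yaml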
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-467441 -n addons-467441
helpers_test.go:261: (dbg) Run:  kubectl --context addons-467441 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-467441 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (349.46s)

TestFunctional/parallel/PersistentVolumeClaim (188.78s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [a2473934-a93a-45de-b2c3-d95f41211e3a] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004631755s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-300067 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-300067 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-300067 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-300067 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8b29eadf-2d11-44e1-8ab8-9ad657c1b1f9] Pending
helpers_test.go:344: "sp-pod" [8b29eadf-2d11-44e1-8ab8-9ad657c1b1f9] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:329: TestFunctional/parallel/PersistentVolumeClaim: WARNING: pod list for "default" "test=storage-provisioner" returned: client rate limiter Wait returned an error: context deadline exceeded
functional_test_pvc_test.go:130: ***** TestFunctional/parallel/PersistentVolumeClaim: pod "test=storage-provisioner" failed to start within 3m0s: context deadline exceeded ****
functional_test_pvc_test.go:130: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-300067 -n functional-300067
functional_test_pvc_test.go:130: TestFunctional/parallel/PersistentVolumeClaim: showing logs for failed pods as of 2024-12-16 11:30:11.100105057 +0000 UTC m=+1030.620647730
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-300067 describe po sp-pod -n default
functional_test_pvc_test.go:130: (dbg) kubectl --context functional-300067 describe po sp-pod -n default:
Name:             sp-pod
Namespace:        default
Priority:         0
Service Account:  default
Node:             functional-300067/192.168.49.2
Start Time:       Mon, 16 Dec 2024 11:27:10 +0000
Labels:           test=storage-provisioner
Annotations:      <none>
Status:           Pending
IP:               10.244.0.5
IPs:
  IP:  10.244.0.5
Containers:
  myfrontend:
    Container ID:   
    Image:          docker.io/nginx
    Image ID:       
    Port:           <none>
    Host Port:      <none>
    State:          Waiting
      Reason:       ImagePullBackOff
    Ready:          False
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /tmp/mount from mypd (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2s5lh (ro)
Conditions:
  Type                        Status
  PodReadyToStartContainers   True 
  Initialized                 True 
  Ready                       False 
  ContainersReady             False 
  PodScheduled                True 
Volumes:
  mypd:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  myclaim
    ReadOnly:   false
  kube-api-access-2s5lh:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                  From               Message
  ----     ------     ----                 ----               -------
  Normal   Scheduled  3m1s                 default-scheduler  Successfully assigned default/sp-pod to functional-300067
  Warning  Failed     2m29s                kubelet            Failed to pull image "docker.io/nginx": determining manifest MIME type for docker://nginx:latest: reading manifest sha256:6d3e464bc399ce5b0cd6a165162deb5926803c1c0ae8a1983ba0a1982b97a7a2 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     106s                 kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:6d3e464bc399ce5b0cd6a165162deb5926803c1c0ae8a1983ba0a1982b97a7a2 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Warning  Failed     53s (x3 over 2m29s)  kubelet            Error: ErrImagePull
  Warning  Failed     53s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
  Normal   BackOff    27s (x4 over 2m29s)  kubelet            Back-off pulling image "docker.io/nginx"
  Warning  Failed     27s (x4 over 2m29s)  kubelet            Error: ImagePullBackOff
  Normal   Pulling    12s (x4 over 3m)     kubelet            Pulling image "docker.io/nginx"
functional_test_pvc_test.go:130: (dbg) Run:  kubectl --context functional-300067 logs sp-pod -n default
functional_test_pvc_test.go:130: (dbg) Non-zero exit: kubectl --context functional-300067 logs sp-pod -n default: exit status 1 (99.566894ms)

** stderr ** 
	Error from server (BadRequest): container "myfrontend" in pod "sp-pod" is waiting to start: trying and failing to pull image

** /stderr **
functional_test_pvc_test.go:130: kubectl --context functional-300067 logs sp-pod -n default: exit status 1
functional_test_pvc_test.go:131: failed waiting for pod: test=storage-provisioner within 3m0s: context deadline exceeded
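
[Editor's note] The root cause here is Docker Hub's anonymous pull rate limit (the "toomanyrequests" events above), not the PVC machinery: the claim bound and sp-pod was scheduled, but the nginx image never arrived within the 3m0s window. A hedged workaround sketch for CI runners (the secret name and credential variables are illustrative, not part of the test):

    kubectl --context functional-300067 create secret docker-registry regcred \
      --docker-username="$DOCKERHUB_USER" --docker-password="$DOCKERHUB_TOKEN"
    kubectl --context functional-300067 patch serviceaccount default \
      -p '{"imagePullSecrets":[{"name":"regcred"}]}'

With the default service account patched, a re-created sp-pod pulls docker.io/nginx under authenticated (higher) limits; mirroring the image into a private registry sidesteps the limit entirely.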
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect functional-300067
helpers_test.go:235: (dbg) docker inspect functional-300067:

-- stdout --
	[
	    {
	        "Id": "813e54fd9cc5d888baafc5b16cf9ac2d8a55297b7e3bd608f34dd31948dc8517",
	        "Created": "2024-12-16T11:24:53.926727204Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 1156176,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-16T11:24:54.082989761Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:02e8be8b1127faa30f09fff745d2a6d385248178d204468bf667a69a71dbf447",
	        "ResolvConfPath": "/var/lib/docker/containers/813e54fd9cc5d888baafc5b16cf9ac2d8a55297b7e3bd608f34dd31948dc8517/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/813e54fd9cc5d888baafc5b16cf9ac2d8a55297b7e3bd608f34dd31948dc8517/hostname",
	        "HostsPath": "/var/lib/docker/containers/813e54fd9cc5d888baafc5b16cf9ac2d8a55297b7e3bd608f34dd31948dc8517/hosts",
	        "LogPath": "/var/lib/docker/containers/813e54fd9cc5d888baafc5b16cf9ac2d8a55297b7e3bd608f34dd31948dc8517/813e54fd9cc5d888baafc5b16cf9ac2d8a55297b7e3bd608f34dd31948dc8517-json.log",
	        "Name": "/functional-300067",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "functional-300067:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "functional-300067",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/e68680d79a74de8ccb95a179bcb7a882d85f802b3811f882fa80a53d3970c320-init/diff:/var/lib/docker/overlay2/d13e29c6821a56996707870a44a8892ca6c52b8aaf1d7542bba33ae7dbaaadff/diff",
	                "MergedDir": "/var/lib/docker/overlay2/e68680d79a74de8ccb95a179bcb7a882d85f802b3811f882fa80a53d3970c320/merged",
	                "UpperDir": "/var/lib/docker/overlay2/e68680d79a74de8ccb95a179bcb7a882d85f802b3811f882fa80a53d3970c320/diff",
	                "WorkDir": "/var/lib/docker/overlay2/e68680d79a74de8ccb95a179bcb7a882d85f802b3811f882fa80a53d3970c320/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "functional-300067",
	                "Source": "/var/lib/docker/volumes/functional-300067/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "functional-300067",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8441/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "functional-300067",
	                "name.minikube.sigs.k8s.io": "functional-300067",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "7d36dda728eca0de43de21969266b2f8c5fa7da4bdd3d4f6b3cdb231746407b7",
	            "SandboxKey": "/var/run/docker/netns/7d36dda728ec",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34251"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34252"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34255"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34253"
	                    }
	                ],
	                "8441/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "34254"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "functional-300067": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "748fcbc83933f6cdb95b89f191215f186c0249647ce94c26aca7c1a402e91eb5",
	                    "EndpointID": "c59414da217f21011f18698cdbddc005962ce7c0e3b07ad5370275454b857f8d",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "functional-300067",
	                        "813e54fd9cc5"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
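
[Editor's note] In the NetworkSettings.Ports block above, every node port (SSH 22, dockerd 2376, API server 8441, ...) is published only on 127.0.0.1 with an ephemeral host port; that is how minikube's docker driver reaches the node from the host. The mapping can be read back with the standard docker CLI:

    docker port functional-300067 8441
    # -> 127.0.0.1:34254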
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p functional-300067 -n functional-300067
helpers_test.go:244: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p functional-300067 logs -n 25: (1.830339758s)
helpers_test.go:252: TestFunctional/parallel/PersistentVolumeClaim logs: 
-- stdout --
	
	==> Audit <==
	|----------------|----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                 Args                                 |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| image          | functional-300067 image load --daemon                                | functional-300067 | jenkins | v1.34.0 | 16 Dec 24 11:29 UTC | 16 Dec 24 11:29 UTC |
	|                | kicbase/echo-server:functional-300067                                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image          | functional-300067 image ls                                           | functional-300067 | jenkins | v1.34.0 | 16 Dec 24 11:29 UTC | 16 Dec 24 11:29 UTC |
	| image          | functional-300067 image save kicbase/echo-server:functional-300067   | functional-300067 | jenkins | v1.34.0 | 16 Dec 24 11:29 UTC | 16 Dec 24 11:29 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image          | functional-300067 image rm                                           | functional-300067 | jenkins | v1.34.0 | 16 Dec 24 11:29 UTC | 16 Dec 24 11:29 UTC |
	|                | kicbase/echo-server:functional-300067                                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image          | functional-300067 image ls                                           | functional-300067 | jenkins | v1.34.0 | 16 Dec 24 11:29 UTC | 16 Dec 24 11:29 UTC |
	| image          | functional-300067 image load                                         | functional-300067 | jenkins | v1.34.0 | 16 Dec 24 11:29 UTC | 16 Dec 24 11:29 UTC |
	|                | /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image          | functional-300067 image ls                                           | functional-300067 | jenkins | v1.34.0 | 16 Dec 24 11:29 UTC | 16 Dec 24 11:29 UTC |
	| image          | functional-300067 image save --daemon                                | functional-300067 | jenkins | v1.34.0 | 16 Dec 24 11:29 UTC | 16 Dec 24 11:29 UTC |
	|                | kicbase/echo-server:functional-300067                                |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| ssh            | functional-300067 ssh sudo cat                                       | functional-300067 | jenkins | v1.34.0 | 16 Dec 24 11:29 UTC | 16 Dec 24 11:29 UTC |
	|                | /etc/test/nested/copy/1137938/hosts                                  |                   |         |         |                     |                     |
	| ssh            | functional-300067 ssh sudo cat                                       | functional-300067 | jenkins | v1.34.0 | 16 Dec 24 11:29 UTC | 16 Dec 24 11:29 UTC |
	|                | /etc/ssl/certs/1137938.pem                                           |                   |         |         |                     |                     |
	| ssh            | functional-300067 ssh sudo cat                                       | functional-300067 | jenkins | v1.34.0 | 16 Dec 24 11:29 UTC | 16 Dec 24 11:29 UTC |
	|                | /usr/share/ca-certificates/1137938.pem                               |                   |         |         |                     |                     |
	| ssh            | functional-300067 ssh sudo cat                                       | functional-300067 | jenkins | v1.34.0 | 16 Dec 24 11:29 UTC | 16 Dec 24 11:29 UTC |
	|                | /etc/ssl/certs/51391683.0                                            |                   |         |         |                     |                     |
	| ssh            | functional-300067 ssh sudo cat                                       | functional-300067 | jenkins | v1.34.0 | 16 Dec 24 11:29 UTC | 16 Dec 24 11:29 UTC |
	|                | /etc/ssl/certs/11379382.pem                                          |                   |         |         |                     |                     |
	| ssh            | functional-300067 ssh sudo cat                                       | functional-300067 | jenkins | v1.34.0 | 16 Dec 24 11:29 UTC | 16 Dec 24 11:29 UTC |
	|                | /usr/share/ca-certificates/11379382.pem                              |                   |         |         |                     |                     |
	| ssh            | functional-300067 ssh sudo cat                                       | functional-300067 | jenkins | v1.34.0 | 16 Dec 24 11:29 UTC | 16 Dec 24 11:29 UTC |
	|                | /etc/ssl/certs/3ec20f2e.0                                            |                   |         |         |                     |                     |
	| image          | functional-300067                                                    | functional-300067 | jenkins | v1.34.0 | 16 Dec 24 11:29 UTC | 16 Dec 24 11:29 UTC |
	|                | image ls --format short                                              |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image          | functional-300067                                                    | functional-300067 | jenkins | v1.34.0 | 16 Dec 24 11:29 UTC | 16 Dec 24 11:29 UTC |
	|                | image ls --format yaml                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| ssh            | functional-300067 ssh pgrep                                          | functional-300067 | jenkins | v1.34.0 | 16 Dec 24 11:29 UTC |                     |
	|                | buildkitd                                                            |                   |         |         |                     |                     |
	| image          | functional-300067 image build -t                                     | functional-300067 | jenkins | v1.34.0 | 16 Dec 24 11:29 UTC | 16 Dec 24 11:29 UTC |
	|                | localhost/my-image:functional-300067                                 |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                     |                   |         |         |                     |                     |
	| image          | functional-300067 image ls                                           | functional-300067 | jenkins | v1.34.0 | 16 Dec 24 11:29 UTC | 16 Dec 24 11:29 UTC |
	| image          | functional-300067                                                    | functional-300067 | jenkins | v1.34.0 | 16 Dec 24 11:29 UTC | 16 Dec 24 11:29 UTC |
	|                | image ls --format json                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| image          | functional-300067                                                    | functional-300067 | jenkins | v1.34.0 | 16 Dec 24 11:29 UTC | 16 Dec 24 11:29 UTC |
	|                | image ls --format table                                              |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                    |                   |         |         |                     |                     |
	| update-context | functional-300067                                                    | functional-300067 | jenkins | v1.34.0 | 16 Dec 24 11:29 UTC | 16 Dec 24 11:29 UTC |
	|                | update-context                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                               |                   |         |         |                     |                     |
	| update-context | functional-300067                                                    | functional-300067 | jenkins | v1.34.0 | 16 Dec 24 11:29 UTC | 16 Dec 24 11:29 UTC |
	|                | update-context                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                               |                   |         |         |                     |                     |
	| update-context | functional-300067                                                    | functional-300067 | jenkins | v1.34.0 | 16 Dec 24 11:29 UTC | 16 Dec 24 11:29 UTC |
	|                | update-context                                                       |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                               |                   |         |         |                     |                     |
	|----------------|----------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 11:28:50
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 11:28:50.878882 1166827 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:28:50.879017 1166827 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:28:50.879028 1166827 out.go:358] Setting ErrFile to fd 2...
	I1216 11:28:50.879034 1166827 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:28:50.879396 1166827 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-1132549/.minikube/bin
	I1216 11:28:50.879827 1166827 out.go:352] Setting JSON to false
	I1216 11:28:50.880876 1166827 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":29476,"bootTime":1734319055,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1216 11:28:50.880982 1166827 start.go:139] virtualization:  
	I1216 11:28:50.884391 1166827 out.go:177] * [functional-300067] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1216 11:28:50.887254 1166827 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 11:28:50.887362 1166827 notify.go:220] Checking for updates...
	I1216 11:28:50.892943 1166827 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 11:28:50.895898 1166827 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20107-1132549/kubeconfig
	I1216 11:28:50.898769 1166827 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-1132549/.minikube
	I1216 11:28:50.901758 1166827 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 11:28:50.904681 1166827 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 11:28:50.908082 1166827 config.go:182] Loaded profile config "functional-300067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:28:50.908665 1166827 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 11:28:50.938529 1166827 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1216 11:28:50.938702 1166827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 11:28:51.009670 1166827 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:51 SystemTime:2024-12-16 11:28:50.994508168 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1216 11:28:51.009790 1166827 docker.go:318] overlay module found
	I1216 11:28:51.012926 1166827 out.go:177] * Using the docker driver based on existing profile
	I1216 11:28:51.015815 1166827 start.go:297] selected driver: docker
	I1216 11:28:51.015841 1166827 start.go:901] validating driver "docker" against &{Name:functional-300067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-300067 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:28:51.015963 1166827 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 11:28:51.016077 1166827 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 11:28:51.079997 1166827 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:51 SystemTime:2024-12-16 11:28:51.071097654 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1216 11:28:51.080479 1166827 cni.go:84] Creating CNI manager for ""
	I1216 11:28:51.080533 1166827 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 11:28:51.080590 1166827 start.go:340] cluster config:
	{Name:functional-300067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-300067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Containe
rRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker Bin
aryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:28:51.083864 1166827 out.go:177] * dry-run validation complete!
	
	
	==> CRI-O <==
	Dec 16 11:29:24 functional-300067 crio[4144]: time="2024-12-16 11:29:24.368966813Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a,RepoTags:[],RepoDigests:[docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a],Size_:42263767,Uid:nil,Username:nonroot,Spec:nil,},Info:map[string]string{},}" id=0826dc58-c88c-4bae-aa18-b8fc95e05009 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 11:29:24 functional-300067 crio[4144]: time="2024-12-16 11:29:24.370710194Z" level=info msg="Creating container: kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-fthh6/dashboard-metrics-scraper" id=6edd16c8-b2a5-4711-a43a-21d90b2447bb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 11:29:24 functional-300067 crio[4144]: time="2024-12-16 11:29:24.370815881Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 16 11:29:24 functional-300067 crio[4144]: time="2024-12-16 11:29:24.391061262Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/0823a1b1bd120db5c0d1a64edb9a840d65e0df95b1f5db3cd1d64c36cffbaf82/merged/etc/group: no such file or directory"
	Dec 16 11:29:24 functional-300067 crio[4144]: time="2024-12-16 11:29:24.435303303Z" level=info msg="Created container 195fe45212fba23b0b702705095b4767f83e2f8e35a1df449cea83a07df27ae3: kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-fthh6/dashboard-metrics-scraper" id=6edd16c8-b2a5-4711-a43a-21d90b2447bb name=/runtime.v1.RuntimeService/CreateContainer
	Dec 16 11:29:24 functional-300067 crio[4144]: time="2024-12-16 11:29:24.436265691Z" level=info msg="Starting container: 195fe45212fba23b0b702705095b4767f83e2f8e35a1df449cea83a07df27ae3" id=fc9b40e4-bc52-4d8b-a826-170e2b2a27d3 name=/runtime.v1.RuntimeService/StartContainer
	Dec 16 11:29:24 functional-300067 crio[4144]: time="2024-12-16 11:29:24.443701898Z" level=info msg="Started container" PID=6576 containerID=195fe45212fba23b0b702705095b4767f83e2f8e35a1df449cea83a07df27ae3 description=kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-fthh6/dashboard-metrics-scraper id=fc9b40e4-bc52-4d8b-a826-170e2b2a27d3 name=/runtime.v1.RuntimeService/StartContainer sandboxID=a48274fd34d53ead81f06e972a754182293db16468fcb127abcb084cce7fdc88
	Dec 16 11:29:29 functional-300067 crio[4144]: time="2024-12-16 11:29:29.381877269Z" level=info msg="Checking image status: docker.io/nginx:latest" id=d5a59fef-1596-447d-b256-3cd46f8818e5 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 11:29:29 functional-300067 crio[4144]: time="2024-12-16 11:29:29.382098990Z" level=info msg="Image docker.io/nginx:latest not found" id=d5a59fef-1596-447d-b256-3cd46f8818e5 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 11:29:33 functional-300067 crio[4144]: time="2024-12-16 11:29:33.181914118Z" level=info msg="Checking image status: kicbase/echo-server:functional-300067" id=d804a0fc-1a2a-4519-8644-9c80db6da30b name=/runtime.v1.ImageService/ImageStatus
	Dec 16 11:29:33 functional-300067 crio[4144]: time="2024-12-16 11:29:33.217598324Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-300067" id=9c475339-8070-4e25-b8e5-6a24a9837f5b name=/runtime.v1.ImageService/ImageStatus
	Dec 16 11:29:33 functional-300067 crio[4144]: time="2024-12-16 11:29:33.217834994Z" level=info msg="Image docker.io/kicbase/echo-server:functional-300067 not found" id=9c475339-8070-4e25-b8e5-6a24a9837f5b name=/runtime.v1.ImageService/ImageStatus
	Dec 16 11:29:33 functional-300067 crio[4144]: time="2024-12-16 11:29:33.254336558Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-300067" id=2fe7c05d-436b-4001-850e-8cdadb2b3665 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 11:29:33 functional-300067 crio[4144]: time="2024-12-16 11:29:33.254571825Z" level=info msg="Image localhost/kicbase/echo-server:functional-300067 not found" id=2fe7c05d-436b-4001-850e-8cdadb2b3665 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 11:29:36 functional-300067 crio[4144]: time="2024-12-16 11:29:36.831449658Z" level=info msg="Checking image status: kicbase/echo-server:functional-300067" id=d9c5c61c-a6b3-4322-beba-354b582177b2 name=/runtime.v1.ImageService/ImageStatus
	Dec 16 11:29:36 functional-300067 crio[4144]: time="2024-12-16 11:29:36.869889526Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:functional-300067" id=0df658cf-e02b-4255-952e-5b5020a8331d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 11:29:36 functional-300067 crio[4144]: time="2024-12-16 11:29:36.870124261Z" level=info msg="Image docker.io/kicbase/echo-server:functional-300067 not found" id=0df658cf-e02b-4255-952e-5b5020a8331d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 11:29:36 functional-300067 crio[4144]: time="2024-12-16 11:29:36.905678427Z" level=info msg="Checking image status: localhost/kicbase/echo-server:functional-300067" id=8dfde4ce-1dd1-4f8f-ba29-eb9780b3fdae name=/runtime.v1.ImageService/ImageStatus
	Dec 16 11:29:36 functional-300067 crio[4144]: time="2024-12-16 11:29:36.905890253Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[localhost/kicbase/echo-server:functional-300067],RepoDigests:[localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a],Size_:4788229,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=8dfde4ce-1dd1-4f8f-ba29-eb9780b3fdae name=/runtime.v1.ImageService/ImageStatus
	Dec 16 11:29:44 functional-300067 crio[4144]: time="2024-12-16 11:29:44.382493734Z" level=info msg="Checking image status: docker.io/nginx:latest" id=7ee3d46f-3b1d-4235-ac6a-9e3a3f7c9d6d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 11:29:44 functional-300067 crio[4144]: time="2024-12-16 11:29:44.382715866Z" level=info msg="Image docker.io/nginx:latest not found" id=7ee3d46f-3b1d-4235-ac6a-9e3a3f7c9d6d name=/runtime.v1.ImageService/ImageStatus
	Dec 16 11:29:59 functional-300067 crio[4144]: time="2024-12-16 11:29:59.381966680Z" level=info msg="Checking image status: docker.io/nginx:latest" id=08966761-b5fe-48a1-8034-2a07572cc63b name=/runtime.v1.ImageService/ImageStatus
	Dec 16 11:29:59 functional-300067 crio[4144]: time="2024-12-16 11:29:59.382217849Z" level=info msg="Image docker.io/nginx:latest not found" id=08966761-b5fe-48a1-8034-2a07572cc63b name=/runtime.v1.ImageService/ImageStatus
	Dec 16 11:29:59 functional-300067 crio[4144]: time="2024-12-16 11:29:59.383048498Z" level=info msg="Pulling image: docker.io/nginx:latest" id=591dfe0b-041f-431d-bd46-a02ad0ddebc6 name=/runtime.v1.ImageService/PullImage
	Dec 16 11:29:59 functional-300067 crio[4144]: time="2024-12-16 11:29:59.385233813Z" level=info msg="Trying to access \"docker.io/library/nginx:latest\""
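Two patterns in the CRI-O log are worth noting. Each kubelet lookup of the unqualified kicbase/echo-server:functional-300067 fans out into a resolution sequence: docker.io/kicbase/echo-server:functional-300067 (not found), then localhost/kicbase/echo-server:functional-300067 (found), so the tag resolves only against the locally loaded copy. Meanwhile docker.io/nginx:latest is still "not found" and only starts pulling at 11:29:59, which plausibly explains a pod still waiting on that image. The same state can be inspected by hand, assuming crictl in the node image (it ships in kicbase):

	out/minikube-linux-arm64 -p functional-300067 ssh -- sudo crictl inspecti localhost/kicbase/echo-server:functional-300067
	out/minikube-linux-arm64 -p functional-300067 ssh -- sudo crictl pull docker.io/nginx:latest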
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                            CREATED              STATE               NAME                        ATTEMPT             POD ID              POD
	195fe45212fba       docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c   48 seconds ago       Running             dashboard-metrics-scraper   0                   a48274fd34d53       dashboard-metrics-scraper-c5db448b4-fthh6
	4efc16c1b5d3e       docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93         49 seconds ago       Running             kubernetes-dashboard        0                   588cdba1e80c6       kubernetes-dashboard-695b96c756-tn6sz
	a5a841013a394       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e              About a minute ago   Exited              mount-munger                0                   4c1e931c6d531       busybox-mount
	49075b951c449       72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb                                                 2 minutes ago        Running             echoserver-arm              0                   af3abf15bff23       hello-node-64b4f8f9ff-7zbmm
	23f66d534bbca       registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5           2 minutes ago        Running             echoserver-arm              0                   bac345af6e7bb       hello-node-connect-65d86f57f4-g55vv
	5f355b45a0992       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                  3 minutes ago        Running             nginx                       0                   e88ce8490340a       nginx-svc
	a2904690bcd6c       021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba                                                 3 minutes ago        Running             kube-proxy                  2                   58a48f88ddad2       kube-proxy-pdtp6
	34d5f98ca25f1       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                 3 minutes ago        Running             coredns                     2                   e2f6c6493ff08       coredns-7c65d6cfc9-kls29
	4ab8b00e38ec4       2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903                                                 3 minutes ago        Running             kindnet-cni                 2                   0084754f583b4       kindnet-9h7ss
	a213dcf478b1c       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 3 minutes ago        Running             storage-provisioner         2                   86338fe70c5d7       storage-provisioner
	0498cc9d82807       f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270                                                 3 minutes ago        Running             kube-apiserver              0                   9a36399b5084c       kube-apiserver-functional-300067
	ad7af82987b30       9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba                                                 3 minutes ago        Running             kube-controller-manager     2                   5b2d591444fe6       kube-controller-manager-functional-300067
	bbbb5a0ce5c9f       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                 3 minutes ago        Running             etcd                        2                   7812d5c069ee9       etcd-functional-300067
	aa6808392bf75       d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a                                                 3 minutes ago        Running             kube-scheduler              2                   f9281deb5eef5       kube-scheduler-functional-300067
	8a9aec037c214       9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba                                                 4 minutes ago        Exited              kube-controller-manager     1                   5b2d591444fe6       kube-controller-manager-functional-300067
	f6db253810c71       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                 4 minutes ago        Exited              etcd                        1                   7812d5c069ee9       etcd-functional-300067
	f1845742530a5       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                 4 minutes ago        Exited              coredns                     1                   e2f6c6493ff08       coredns-7c65d6cfc9-kls29
	d6e57ed5540a0       2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903                                                 4 minutes ago        Exited              kindnet-cni                 1                   0084754f583b4       kindnet-9h7ss
	c3759f2b366b3       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                 4 minutes ago        Exited              storage-provisioner         1                   86338fe70c5d7       storage-provisioner
	5ff19c26e51eb       021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba                                                 4 minutes ago        Exited              kube-proxy                  1                   58a48f88ddad2       kube-proxy-pdtp6
	5d8ea4ba78194       d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a                                                 4 minutes ago        Exited              kube-scheduler              1                   f9281deb5eef5       kube-scheduler-functional-300067
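The ATTEMPT column tells the restart story: every long-lived component has a Running entry at attempt 2 and an Exited entry at attempt 1 sharing the same POD ID, matching the second "Starting kubelet" in the node events below, while kube-apiserver sits at attempt 0 in a fresh sandbox because its pod was recreated rather than restarted in place. This table is essentially what crictl reports on the node:

	out/minikube-linux-arm64 -p functional-300067 ssh -- sudo crictl ps -a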
	
	
	==> coredns [34d5f98ca25f15814fbe7924059297ff38df204126c596cc07b13020095eb1d1] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:47880 - 56181 "HINFO IN 5516906071786832299.2515461478211119761. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.043190749s
	
	
	==> coredns [f1845742530a5a17feb0ca7532810ad3c8eb765aa0979b47ceb8b295934512ae] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 05e3eaddc414b2d71a69b2e2bc6f2681fc1f4d04bcdd3acc1a41457bb7db518208b95ddfc4c9fffedc59c25a8faf458be1af4915a4a3c0d6777cb7a346bc5d86
	CoreDNS-1.11.3
	linux/arm64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48584 - 43346 "HINFO IN 5060669752035849292.4864797719738980788. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.026033569s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
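This is the attempt-1 coredns: it waited out the API server restart, came up on :53, answered one HINFO self-check, and was then terminated; the SIGTERM and 5s lameduck lines are the health plugin's shutdown grace, which lines up with the health { lameduck 5s } stanza in the stock kubeadm Corefile. The live Corefile can be read back with:

	kubectl --context functional-300067 -n kube-system get configmap coredns -o yaml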
	
	
	==> describe nodes <==
	Name:               functional-300067
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=functional-300067
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=22da80be3b90f71512d84256b3df4ef76bd13ff8
	                    minikube.k8s.io/name=functional-300067
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_16T11_25_18_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 16 Dec 2024 11:25:15 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  functional-300067
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 16 Dec 2024 11:30:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 16 Dec 2024 11:30:11 +0000   Mon, 16 Dec 2024 11:25:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 16 Dec 2024 11:30:11 +0000   Mon, 16 Dec 2024 11:25:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 16 Dec 2024 11:30:11 +0000   Mon, 16 Dec 2024 11:25:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 16 Dec 2024 11:30:11 +0000   Mon, 16 Dec 2024 11:25:35 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    functional-300067
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 d66525f911d742d08b21dc3a81ee6532
	  System UUID:                6641ade2-9d3e-4861-b288-8ee929012daa
	  Boot ID:                    4589c027-c057-41f4-bde7-e198f2c36aaf
	  Kernel Version:             5.15.0-1072-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (14 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     hello-node-64b4f8f9ff-7zbmm                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	  default                     hello-node-connect-65d86f57f4-g55vv          0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m57s
	  default                     nginx-svc                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  default                     sp-pod                                       0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m2s
	  kube-system                 coredns-7c65d6cfc9-kls29                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     4m50s
	  kube-system                 etcd-functional-300067                       100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         4m55s
	  kube-system                 kindnet-9h7ss                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      4m50s
	  kube-system                 kube-apiserver-functional-300067             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m35s
	  kube-system                 kube-controller-manager-functional-300067    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-proxy-pdtp6                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m50s
	  kube-system                 kube-scheduler-functional-300067             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m49s
	  kubernetes-dashboard        dashboard-metrics-scraper-c5db448b4-fthh6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kubernetes-dashboard        kubernetes-dashboard-695b96c756-tn6sz        0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                850m (42%)  100m (5%)
	  memory             220Mi (2%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m48s                  kube-proxy       
	  Normal   Starting                 3m34s                  kube-proxy       
	  Normal   Starting                 4m18s                  kube-proxy       
	  Warning  CgroupV1                 4m55s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  4m55s                  kubelet          Node functional-300067 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m55s                  kubelet          Node functional-300067 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m55s                  kubelet          Node functional-300067 status is now: NodeHasSufficientPID
	  Normal   Starting                 4m55s                  kubelet          Starting kubelet.
	  Normal   RegisteredNode           4m51s                  node-controller  Node functional-300067 event: Registered Node functional-300067 in Controller
	  Normal   NodeReady                4m37s                  kubelet          Node functional-300067 status is now: NodeReady
	  Normal   RegisteredNode           4m15s                  node-controller  Node functional-300067 event: Registered Node functional-300067 in Controller
	  Normal   NodeHasSufficientMemory  3m40s (x8 over 3m40s)  kubelet          Node functional-300067 status is now: NodeHasSufficientMemory
	  Warning  CgroupV1                 3m40s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   Starting                 3m40s                  kubelet          Starting kubelet.
	  Normal   NodeHasNoDiskPressure    3m40s (x8 over 3m40s)  kubelet          Node functional-300067 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m40s (x7 over 3m40s)  kubelet          Node functional-300067 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           3m32s                  node-controller  Node functional-300067 event: Registered Node functional-300067 in Controller
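The Allocated resources block is just the column sums from the pod table above: CPU requests 100m (coredns) + 100m (etcd) + 100m (kindnet) + 250m (kube-apiserver) + 200m (kube-controller-manager) + 100m (kube-scheduler) = 850m, and 850m of the 2000m allocatable is the reported 42%; memory requests 70Mi + 100Mi + 50Mi = 220Mi and limits 170Mi + 50Mi = 220Mi. The same view can be regenerated at any time with:

	kubectl --context functional-300067 describe node functional-300067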
	
	
	==> dmesg <==
	[Dec16 11:28] 9pnet: p9_fd_create_tcp (1165582): problem connecting socket to 192.168.49.1
	[ +14.961683] FS-Cache: Duplicate cookie detected
	[  +0.000724] FS-Cache: O-cookie c=0000005a [p=00000002 fl=222 nc=0 na=1]
	[  +0.001017] FS-Cache: O-cookie d=00000000592ac1f0{9P.session} n=000000001003b1eb
	[  +0.001110] FS-Cache: O-key=[10] '34333032323630353435'
	[  +0.000781] FS-Cache: N-cookie c=0000005b [p=00000002 fl=2 nc=0 na=1]
	[  +0.000967] FS-Cache: N-cookie d=00000000592ac1f0{9P.session} n=000000004a329d52
	[  +0.001137] FS-Cache: N-key=[10] '34333032323630353435'
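The dmesg lines are 9p client traces from the mount tests: the p9_fd_create_tcp "problem connecting socket to 192.168.49.1" entry at Dec16 11:28 is the guest failing to reach the 9p server on the host, the kernel-side counterpart of the TestFunctional/parallel/MountCmd/specific-port failure, while the FS-Cache duplicate-cookie messages are typically harmless warnings from remounting the same 9p session. That test exercises a host-driven mount along these lines (paths and port here are placeholders, not the test's actual values):

	out/minikube-linux-arm64 mount <host-dir>:<guest-dir> --port <port> -p functional-300067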
	
	
	==> etcd [bbbb5a0ce5c9fa4171b8cbc1476d79cc65106b2a20a2d32a37247ace1c0275fa] <==
	{"level":"info","ts":"2024-12-16T11:26:33.110946Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc switched to configuration voters=(12593026477526642892)"}
	{"level":"info","ts":"2024-12-16T11:26:33.114843Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","added-peer-id":"aec36adc501070cc","added-peer-peer-urls":["https://192.168.49.2:2380"]}
	{"level":"info","ts":"2024-12-16T11:26:33.126223Z","caller":"embed/etcd.go:728","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-12-16T11:26:33.135365Z","caller":"embed/etcd.go:279","msg":"now serving peer/client/metrics","local-member-id":"aec36adc501070cc","initial-advertise-peer-urls":["https://192.168.49.2:2380"],"listen-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.49.2:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-12-16T11:26:33.135479Z","caller":"embed/etcd.go:870","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-12-16T11:26:33.135571Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"fa54960ea34d58be","local-member-id":"aec36adc501070cc","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T11:26:33.135636Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-12-16T11:26:33.137794Z","caller":"embed/etcd.go:599","msg":"serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-12-16T11:26:33.152826Z","caller":"embed/etcd.go:571","msg":"cmux::serve","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-12-16T11:26:34.364804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc is starting a new election at term 3"}
	{"level":"info","ts":"2024-12-16T11:26:34.364919Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 3"}
	{"level":"info","ts":"2024-12-16T11:26:34.364971Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-12-16T11:26:34.365011Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 4"}
	{"level":"info","ts":"2024-12-16T11:26:34.365043Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-12-16T11:26:34.365079Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 4"}
	{"level":"info","ts":"2024-12-16T11:26:34.365116Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 4"}
	{"level":"info","ts":"2024-12-16T11:26:34.373031Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-300067 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-16T11:26:34.373227Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T11:26:34.374157Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-16T11:26:34.375171Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-16T11:26:34.380784Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T11:26:34.392998Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-16T11:26:34.393070Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-16T11:26:34.393704Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-16T11:26:34.394603Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	
	
	==> etcd [f6db253810c713508777b6a2b9f6b4ac1817a6a280858114a611a4e59095b420] <==
	{"level":"info","ts":"2024-12-16T11:25:51.475284Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became pre-candidate at term 2"}
	{"level":"info","ts":"2024-12-16T11:25:51.475368Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgPreVoteResp from aec36adc501070cc at term 2"}
	{"level":"info","ts":"2024-12-16T11:25:51.475409Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became candidate at term 3"}
	{"level":"info","ts":"2024-12-16T11:25:51.475444Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc received MsgVoteResp from aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-12-16T11:25:51.475485Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"aec36adc501070cc became leader at term 3"}
	{"level":"info","ts":"2024-12-16T11:25:51.475518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: aec36adc501070cc elected leader aec36adc501070cc at term 3"}
	{"level":"info","ts":"2024-12-16T11:25:51.484977Z","caller":"etcdserver/server.go:2118","msg":"published local member to cluster through raft","local-member-id":"aec36adc501070cc","local-member-attributes":"{Name:functional-300067 ClientURLs:[https://192.168.49.2:2379]}","request-path":"/0/members/aec36adc501070cc/attributes","cluster-id":"fa54960ea34d58be","publish-timeout":"7s"}
	{"level":"info","ts":"2024-12-16T11:25:51.485214Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T11:25:51.485516Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-12-16T11:25:51.486272Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-16T11:25:51.487191Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-12-16T11:25:51.487394Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-12-16T11:25:51.487440Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-12-16T11:25:51.488153Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-16T11:25:51.489052Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-12-16T11:26:22.046150Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-12-16T11:26:22.046204Z","caller":"embed/etcd.go:377","msg":"closing etcd server","name":"functional-300067","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	{"level":"warn","ts":"2024-12-16T11:26:22.046283Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-16T11:26:22.046356Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-16T11:26:22.096180Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-12-16T11:26:22.096296Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.49.2:2379: use of closed network connection"}
	{"level":"info","ts":"2024-12-16T11:26:22.096381Z","caller":"etcdserver/server.go:1521","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"aec36adc501070cc","current-leader-member-id":"aec36adc501070cc"}
	{"level":"info","ts":"2024-12-16T11:26:22.099493Z","caller":"embed/etcd.go:581","msg":"stopping serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-12-16T11:26:22.099667Z","caller":"embed/etcd.go:586","msg":"stopped serving peer traffic","address":"192.168.49.2:2380"}
	{"level":"info","ts":"2024-12-16T11:26:22.099705Z","caller":"embed/etcd.go:379","msg":"closed etcd server","name":"functional-300067","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.49.2:2380"],"advertise-client-urls":["https://192.168.49.2:2379"]}
	
	
	==> kernel <==
	 11:30:12 up  8:12,  0 users,  load average: 0.72, 0.98, 1.49
	Linux functional-300067 5.15.0-1072-aws #78~20.04.1-Ubuntu SMP Wed Oct 9 15:29:54 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [4ab8b00e38ec42176b49f0d3fd54414274bb644a06766a32eb77678ebe901762] <==
	I1216 11:28:08.148877       1 main.go:301] handling current node
	I1216 11:28:18.152845       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:28:18.152885       1 main.go:301] handling current node
	I1216 11:28:28.145797       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:28:28.145840       1 main.go:301] handling current node
	I1216 11:28:38.145991       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:28:38.146024       1 main.go:301] handling current node
	I1216 11:28:48.152842       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:28:48.152883       1 main.go:301] handling current node
	I1216 11:28:58.145743       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:28:58.145854       1 main.go:301] handling current node
	I1216 11:29:08.154407       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:29:08.154444       1 main.go:301] handling current node
	I1216 11:29:18.146346       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:29:18.146384       1 main.go:301] handling current node
	I1216 11:29:28.145753       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:29:28.145870       1 main.go:301] handling current node
	I1216 11:29:38.146675       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:29:38.146820       1 main.go:301] handling current node
	I1216 11:29:48.147583       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:29:48.147701       1 main.go:301] handling current node
	I1216 11:29:58.148854       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:29:58.148983       1 main.go:301] handling current node
	I1216 11:30:08.152825       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:30:08.152859       1 main.go:301] handling current node
	
	
	==> kindnet [d6e57ed5540a03128a428bc1b271201a7a4ec17d07c5bc290217139abf2bf366] <==
	I1216 11:25:50.136552       1 main.go:109] connected to apiserver: https://10.96.0.1:443
	I1216 11:25:50.136793       1 main.go:139] hostIP = 192.168.49.2
	podIP = 192.168.49.2
	I1216 11:25:50.136920       1 main.go:148] setting mtu 1500 for CNI 
	I1216 11:25:50.136939       1 main.go:178] kindnetd IP family: "ipv4"
	I1216 11:25:50.136953       1 main.go:182] noMask IPv4 subnets: [10.244.0.0/16]
	I1216 11:25:50.425538       1 controller.go:361] Starting controller kube-network-policies
	I1216 11:25:50.425560       1 controller.go:365] Waiting for informer caches to sync
	I1216 11:25:50.425567       1 shared_informer.go:313] Waiting for caches to sync for kube-network-policies
	I1216 11:25:54.426540       1 shared_informer.go:320] Caches are synced for kube-network-policies
	I1216 11:25:54.426678       1 metrics.go:61] Registering metrics
	I1216 11:25:54.426775       1 controller.go:401] Syncing nftables rules
	I1216 11:26:00.425912       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:26:00.425956       1 main.go:301] handling current node
	I1216 11:26:10.425741       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:26:10.425777       1 main.go:301] handling current node
	I1216 11:26:20.425045       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1216 11:26:20.425087       1 main.go:301] handling current node
	
	
	==> kube-apiserver [0498cc9d8280774b642e1214bfa19fcd09a010dce91ad1ec6583d658dc7cc072] <==
	I1216 11:26:37.013972       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I1216 11:26:37.014028       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I1216 11:26:37.014572       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I1216 11:26:37.014612       1 aggregator.go:171] initial CRD sync complete...
	I1216 11:26:37.014619       1 autoregister_controller.go:144] Starting autoregister controller
	I1216 11:26:37.014625       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I1216 11:26:37.014631       1 cache.go:39] Caches are synced for autoregister controller
	I1216 11:26:37.019074       1 shared_informer.go:320] Caches are synced for node_authorizer
	E1216 11:26:37.025934       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I1216 11:26:37.781788       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I1216 11:26:38.852742       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I1216 11:26:38.978343       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I1216 11:26:38.991239       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I1216 11:26:39.087170       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I1216 11:26:39.096596       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I1216 11:26:40.297550       1 controller.go:615] quota admission added evaluator for: endpoints
	I1216 11:26:40.496837       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I1216 11:26:58.681264       1 alloc.go:330] "allocated clusterIPs" service="default/invalid-svc" clusterIPs={"IPv4":"10.111.71.26"}
	I1216 11:27:05.048855       1 alloc.go:330] "allocated clusterIPs" service="default/nginx-svc" clusterIPs={"IPv4":"10.109.36.251"}
	I1216 11:27:15.490560       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I1216 11:27:15.622881       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node-connect" clusterIPs={"IPv4":"10.100.187.141"}
	I1216 11:27:52.221546       1 alloc.go:330] "allocated clusterIPs" service="default/hello-node" clusterIPs={"IPv4":"10.106.171.85"}
	I1216 11:28:52.342020       1 controller.go:615] quota admission added evaluator for: namespaces
	I1216 11:28:52.645160       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/kubernetes-dashboard" clusterIPs={"IPv4":"10.110.252.225"}
	I1216 11:28:52.669336       1 alloc.go:330] "allocated clusterIPs" service="kubernetes-dashboard/dashboard-metrics-scraper" clusterIPs={"IPv4":"10.105.102.13"}
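The apiserver log doubles as a Service timeline: cluster IPs are allocated for invalid-svc (10.111.71.26), nginx-svc (10.109.36.251), hello-node-connect (10.100.187.141), hello-node (10.106.171.85), and the two dashboard services, pinning down when each functional sub-test created its Service. The allocations are easy to cross-check:

	kubectl --context functional-300067 get svc -A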
	
	
	==> kube-controller-manager [8a9aec037c214b8b34b06e8986f6932482c7024a545841e32cfd44b77da4c99f] <==
	I1216 11:25:57.467144       1 shared_informer.go:320] Caches are synced for disruption
	I1216 11:25:57.467147       1 shared_informer.go:320] Caches are synced for PVC protection
	I1216 11:25:57.467167       1 shared_informer.go:320] Caches are synced for TTL
	I1216 11:25:57.467197       1 shared_informer.go:320] Caches are synced for attach detach
	I1216 11:25:57.468295       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I1216 11:25:57.468409       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7c65d6cfc9" duration="64.721µs"
	I1216 11:25:57.474768       1 shared_informer.go:320] Caches are synced for node
	I1216 11:25:57.474846       1 range_allocator.go:171] "Sending events to api server" logger="node-ipam-controller"
	I1216 11:25:57.474868       1 range_allocator.go:177] "Starting range CIDR allocator" logger="node-ipam-controller"
	I1216 11:25:57.474873       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I1216 11:25:57.474878       1 shared_informer.go:320] Caches are synced for cidrallocator
	I1216 11:25:57.474938       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-300067"
	I1216 11:25:57.481664       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I1216 11:25:57.517245       1 shared_informer.go:320] Caches are synced for daemon sets
	I1216 11:25:57.517298       1 shared_informer.go:320] Caches are synced for taint
	I1216 11:25:57.517371       1 node_lifecycle_controller.go:1232] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I1216 11:25:57.517457       1 node_lifecycle_controller.go:884] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="functional-300067"
	I1216 11:25:57.517524       1 node_lifecycle_controller.go:1078] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I1216 11:25:57.530937       1 shared_informer.go:320] Caches are synced for persistent volume
	I1216 11:25:57.955476       1 shared_informer.go:320] Caches are synced for garbage collector
	I1216 11:25:57.969945       1 shared_informer.go:320] Caches are synced for garbage collector
	I1216 11:25:57.970045       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I1216 11:25:58.161785       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-300067"
	I1216 11:26:08.288625       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-300067"
	I1216 11:26:18.422506       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-300067"
	
	
	==> kube-controller-manager [ad7af82987b307a5f3eb234acf4c0be442801d7e648ef3ccbbcd0bda914dd34c] <==
	E1216 11:28:52.470731       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1216 11:28:52.477612       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="5.803051ms"
	E1216 11:28:52.477665       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1216 11:28:52.479639       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="8.032155ms"
	E1216 11:28:52.479676       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1216 11:28:52.498406       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="15.068709ms"
	E1216 11:28:52.498519       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4\" failed with pods \"dashboard-metrics-scraper-c5db448b4-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1216 11:28:52.499248       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="15.789445ms"
	E1216 11:28:52.499344       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1216 11:28:52.508314       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="7.610873ms"
	E1216 11:28:52.508421       1 replica_set.go:560] "Unhandled Error" err="sync \"kubernetes-dashboard/kubernetes-dashboard-695b96c756\" failed with pods \"kubernetes-dashboard-695b96c756-\" is forbidden: error looking up service account kubernetes-dashboard/kubernetes-dashboard: serviceaccount \"kubernetes-dashboard\" not found" logger="UnhandledError"
	I1216 11:28:52.538344       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="17.384637ms"
	I1216 11:28:52.557491       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="18.977146ms"
	I1216 11:28:52.557683       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="72.671µs"
	I1216 11:28:52.582910       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="40.739009ms"
	I1216 11:28:52.585480       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="39.507µs"
	I1216 11:28:52.610498       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="27.539405ms"
	I1216 11:28:52.610572       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="35.798µs"
	I1216 11:28:52.633249       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="45.251µs"
	I1216 11:29:23.842570       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="15.00298ms"
	I1216 11:29:23.842729       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/kubernetes-dashboard-695b96c756" duration="42.739µs"
	I1216 11:29:24.843479       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="14.635154ms"
	I1216 11:29:24.843563       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4" duration="39.104µs"
	I1216 11:29:40.726182       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-300067"
	I1216 11:30:11.383531       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="functional-300067"
	
	
	==> kube-proxy [5ff19c26e51ebf8624152fa557bfec3ba291c09f506b7cda3dca20705a19ae8c] <==
	I1216 11:25:50.608034       1 server_linux.go:66] "Using iptables proxy"
	I1216 11:25:54.421278       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1216 11:25:54.421427       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 11:25:54.640223       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 11:25:54.640380       1 server_linux.go:169] "Using iptables Proxier"
	I1216 11:25:54.655906       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 11:25:54.656269       1 server.go:483] "Version info" version="v1.31.2"
	I1216 11:25:54.656293       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 11:25:54.658634       1 config.go:199] "Starting service config controller"
	I1216 11:25:54.658688       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1216 11:25:54.658768       1 config.go:105] "Starting endpoint slice config controller"
	I1216 11:25:54.658773       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1216 11:25:54.694439       1 config.go:328] "Starting node config controller"
	I1216 11:25:54.694474       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1216 11:25:54.760895       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1216 11:25:54.761018       1 shared_informer.go:320] Caches are synced for service config
	I1216 11:25:54.804576       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [a2904690bcd6c70a142dca969b82d3050298c68f09bce06c9ab954f156e0b6da] <==
	I1216 11:26:37.974148       1 server_linux.go:66] "Using iptables proxy"
	I1216 11:26:38.084412       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1216 11:26:38.084665       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1216 11:26:38.119472       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1216 11:26:38.120051       1 server_linux.go:169] "Using iptables Proxier"
	I1216 11:26:38.123331       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1216 11:26:38.123701       1 server.go:483] "Version info" version="v1.31.2"
	I1216 11:26:38.123763       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 11:26:38.135837       1 config.go:199] "Starting service config controller"
	I1216 11:26:38.135871       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1216 11:26:38.135892       1 config.go:105] "Starting endpoint slice config controller"
	I1216 11:26:38.135897       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1216 11:26:38.135959       1 config.go:328] "Starting node config controller"
	I1216 11:26:38.135993       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1216 11:26:38.236811       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I1216 11:26:38.240638       1 shared_informer.go:320] Caches are synced for node config
	I1216 11:26:38.240803       1 shared_informer.go:320] Caches are synced for service config
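Both kube-proxy starts emit the same non-fatal warning that nodePortAddresses is unset (so NodePorts answer on every local IP) and then set route_localnet=1, which is what allows NodePort connections over localhost inside the node. The rendered proxy configuration lives in the kubeadm-managed configmap:

	kubectl --context functional-300067 -n kube-system get configmap kube-proxy -o yaml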
	
	
	==> kube-scheduler [5d8ea4ba78194cb4b9fdab5639644a9c4e1fc9d5a42464d77d0f86d278530282] <==
	I1216 11:25:52.676465       1 serving.go:386] Generated self-signed cert in-memory
	W1216 11:25:54.225163       1 requestheader_controller.go:196] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W1216 11:25:54.225316       1 authentication.go:370] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W1216 11:25:54.225354       1 authentication.go:371] Continuing without authentication configuration. This may treat all requests as anonymous.
	W1216 11:25:54.225400       1 authentication.go:372] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I1216 11:25:54.415447       1 server.go:167] "Starting Kubernetes Scheduler" version="v1.31.2"
	I1216 11:25:54.415557       1 server.go:169] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1216 11:25:54.418173       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I1216 11:25:54.418445       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I1216 11:25:54.419778       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1216 11:25:54.418465       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I1216 11:25:54.520886       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I1216 11:26:22.051015       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	I1216 11:26:22.057425       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E1216 11:26:22.057706       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [aa6808392bf75c72fc0ee5eeef5a54b260f2643600e552416e06c460e7ac090b] <==
	W1216 11:26:36.946334       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1216 11:26:36.946388       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 11:26:36.946503       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1216 11:26:36.947235       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 11:26:36.946641       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E1216 11:26:36.947398       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User \"system:kube-scheduler\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 11:26:36.946705       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1216 11:26:36.947497       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 11:26:36.946760       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1216 11:26:36.947595       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 11:26:36.946817       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1216 11:26:36.947689       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 11:26:36.946875       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E1216 11:26:36.947782       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1216 11:26:36.946930       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1216 11:26:36.947874       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1216 11:26:36.946977       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E1216 11:26:36.947961       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1216 11:26:36.947013       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1216 11:26:36.948053       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1216 11:26:36.947045       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E1216 11:26:36.948139       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W1216 11:26:36.947169       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1216 11:26:36.948235       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1216 11:26:38.207432       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
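
The "forbidden" errors above are typical of a scheduler reconnecting while the apiserver is still warming its RBAC caches after a restart; the final "Caches are synced" line shows it recovered on its own. A minimal spot-check sketch, assuming kubectl access with impersonation rights on this cluster (the context name is the one used in this run):

	# Each command should print "yes"; a persistent "no" would point to a real
	# RBAC problem rather than the transient startup errors seen above.
	for r in pods nodes services persistentvolumeclaims csidrivers.storage.k8s.io; do
	  kubectl --context functional-300067 auth can-i list "$r" \
	    --as=system:kube-scheduler --all-namespaces
	done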
	
	
	==> kubelet <==
	Dec 16 11:28:52 functional-300067 kubelet[4475]: I1216 11:28:52.738115    4475 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q8x79\" (UniqueName: \"kubernetes.io/projected/fac27551-1661-4baa-97b4-ffa2aa1f750f-kube-api-access-q8x79\") pod \"dashboard-metrics-scraper-c5db448b4-fthh6\" (UID: \"fac27551-1661-4baa-97b4-ffa2aa1f750f\") " pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-fthh6"
	Dec 16 11:29:02 functional-300067 kubelet[4475]: E1216 11:29:02.433813    4475 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348542433616087,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:221410,},InodesUsed:&UInt64Value{Value:100,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:29:02 functional-300067 kubelet[4475]: E1216 11:29:02.433850    4475 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348542433616087,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:221410,},InodesUsed:&UInt64Value{Value:100,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:29:12 functional-300067 kubelet[4475]: E1216 11:29:12.435701    4475 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348552435469608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:221410,},InodesUsed:&UInt64Value{Value:100,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:29:12 functional-300067 kubelet[4475]: E1216 11:29:12.436174    4475 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348552435469608,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:221410,},InodesUsed:&UInt64Value{Value:100,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:29:18 functional-300067 kubelet[4475]: E1216 11:29:18.665319    4475 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 16 11:29:18 functional-300067 kubelet[4475]: E1216 11:29:18.665830    4475 kuberuntime_image.go:55] "Failed to pull image" err="reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" image="docker.io/nginx:latest"
	Dec 16 11:29:18 functional-300067 kubelet[4475]: E1216 11:29:18.666242    4475 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:myfrontend,Image:docker.io/nginx,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:mypd,ReadOnly:false,MountPath:/tmp/mount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2s5lh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:nil,Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod sp-pod_default(8b29eadf-2d11-44e1-8ab8-9ad657c1b1f9): ErrImagePull: reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit" logger="UnhandledError"
	Dec 16 11:29:18 functional-300067 kubelet[4475]: E1216 11:29:18.670278    4475 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ErrImagePull: \"reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit\"" pod="default/sp-pod" podUID="8b29eadf-2d11-44e1-8ab8-9ad657c1b1f9"
	Dec 16 11:29:22 functional-300067 kubelet[4475]: E1216 11:29:22.437678    4475 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348562437474307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:221410,},InodesUsed:&UInt64Value{Value:100,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:29:22 functional-300067 kubelet[4475]: E1216 11:29:22.437714    4475 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348562437474307,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:221410,},InodesUsed:&UInt64Value{Value:100,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:29:24 functional-300067 kubelet[4475]: I1216 11:29:24.827742    4475 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/kubernetes-dashboard-695b96c756-tn6sz" podStartSLOduration=2.927567863 podStartE2EDuration="32.827721899s" podCreationTimestamp="2024-12-16 11:28:52 +0000 UTC" firstStartedPulling="2024-12-16 11:28:52.905367051 +0000 UTC m=+140.708418836" lastFinishedPulling="2024-12-16 11:29:22.805521096 +0000 UTC m=+170.608572872" observedRunningTime="2024-12-16 11:29:23.82603054 +0000 UTC m=+171.629082333" watchObservedRunningTime="2024-12-16 11:29:24.827721899 +0000 UTC m=+172.630773676"
	Dec 16 11:29:24 functional-300067 kubelet[4475]: I1216 11:29:24.828174    4475 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kubernetes-dashboard/dashboard-metrics-scraper-c5db448b4-fthh6" podStartSLOduration=1.384745658 podStartE2EDuration="32.828166327s" podCreationTimestamp="2024-12-16 11:28:52 +0000 UTC" firstStartedPulling="2024-12-16 11:28:52.924067035 +0000 UTC m=+140.727118812" lastFinishedPulling="2024-12-16 11:29:24.367487696 +0000 UTC m=+172.170539481" observedRunningTime="2024-12-16 11:29:24.826698106 +0000 UTC m=+172.629749891" watchObservedRunningTime="2024-12-16 11:29:24.828166327 +0000 UTC m=+172.631218103"
	Dec 16 11:29:29 functional-300067 kubelet[4475]: E1216 11:29:29.382554    4475 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="8b29eadf-2d11-44e1-8ab8-9ad657c1b1f9"
	Dec 16 11:29:32 functional-300067 kubelet[4475]: E1216 11:29:32.439811    4475 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348572439577807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:242780,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:29:32 functional-300067 kubelet[4475]: E1216 11:29:32.439853    4475 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348572439577807,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:242780,},InodesUsed:&UInt64Value{Value:112,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:29:42 functional-300067 kubelet[4475]: E1216 11:29:42.441504    4475 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348582441274178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:250554,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:29:42 functional-300067 kubelet[4475]: E1216 11:29:42.441540    4475 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348582441274178,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:250554,},InodesUsed:&UInt64Value{Value:117,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:29:44 functional-300067 kubelet[4475]: E1216 11:29:44.383594    4475 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"myfrontend\" with ImagePullBackOff: \"Back-off pulling image \\\"docker.io/nginx\\\"\"" pod="default/sp-pod" podUID="8b29eadf-2d11-44e1-8ab8-9ad657c1b1f9"
	Dec 16 11:29:52 functional-300067 kubelet[4475]: E1216 11:29:52.443913    4475 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348592443700259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:275418,},InodesUsed:&UInt64Value{Value:133,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:29:52 functional-300067 kubelet[4475]: E1216 11:29:52.443953    4475 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348592443700259,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:275418,},InodesUsed:&UInt64Value{Value:133,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:30:02 functional-300067 kubelet[4475]: E1216 11:30:02.446527    4475 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348602446223120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:275418,},InodesUsed:&UInt64Value{Value:133,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:30:02 functional-300067 kubelet[4475]: E1216 11:30:02.446565    4475 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348602446223120,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:275418,},InodesUsed:&UInt64Value{Value:133,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:30:12 functional-300067 kubelet[4475]: E1216 11:30:12.448476    4475 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348612448266907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:275418,},InodesUsed:&UInt64Value{Value:133,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 16 11:30:12 functional-300067 kubelet[4475]: E1216 11:30:12.448511    4475 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734348612448266907,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:275418,},InodesUsed:&UInt64Value{Value:133,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
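
The repeating eviction-manager errors come from the kubelet asking the CRI for image-filesystem stats and receiving a response without the split image/container filesystem data it expects; with cri-o in this configuration they are noisy but harmless here. A sketch for inspecting what the runtime actually reports, assuming crictl is present inside the node:

	# Run inside the minikube node; ImageFilesystems should list the
	# /var/lib/containers/storage/overlay-images mountpoint seen in the errors above.
	minikube -p functional-300067 ssh -- sudo crictl imagefsinfo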
	
	
	==> kubernetes-dashboard [4efc16c1b5d3e51572184ef2451e01d7b0b2b0e39c18858eca3b0b5683cdcfb9] <==
	2024/12/16 11:29:22 Starting overwatch
	2024/12/16 11:29:22 Using namespace: kubernetes-dashboard
	2024/12/16 11:29:22 Using in-cluster config to connect to apiserver
	2024/12/16 11:29:22 Using secret token for csrf signing
	2024/12/16 11:29:22 Initializing csrf token from kubernetes-dashboard-csrf secret
	2024/12/16 11:29:22 Empty token. Generating and storing in a secret kubernetes-dashboard-csrf
	2024/12/16 11:29:22 Successful initial request to the apiserver, version: v1.31.2
	2024/12/16 11:29:22 Generating JWE encryption key
	2024/12/16 11:29:22 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
	2024/12/16 11:29:22 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
	2024/12/16 11:29:23 Initializing JWE encryption key from synchronized object
	2024/12/16 11:29:23 Creating in-cluster Sidecar client
	2024/12/16 11:29:23 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
	2024/12/16 11:29:23 Serving insecurely on HTTP port: 9090
	2024/12/16 11:29:53 Successful request to sidecar
	
	
	==> storage-provisioner [a213dcf478b1cda07061d25dfcc66f8a2a2d1c2534aae6b12bab8aed9f917ce1] <==
	I1216 11:26:37.809728       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 11:26:37.856945       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 11:26:37.857069       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1216 11:26:55.270463       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 11:26:55.270630       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-300067_457de88a-62f8-4c7d-8a2f-313e2c5990db!
	I1216 11:26:55.271235       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"96a8b068-063d-4620-86aa-b0104dbd7043", APIVersion:"v1", ResourceVersion:"605", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-300067_457de88a-62f8-4c7d-8a2f-313e2c5990db became leader
	I1216 11:26:55.371432       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-300067_457de88a-62f8-4c7d-8a2f-313e2c5990db!
	I1216 11:27:10.535175       1 controller.go:1332] provision "default/myclaim" class "standard": started
	I1216 11:27:10.535365       1 storage_provisioner.go:61] Provisioning volume {&StorageClass{ObjectMeta:{standard    3581c9ee-85ea-4e34-93f3-4fd8af7219bd 344 0 2024-12-16 11:25:23 +0000 UTC <nil> <nil> map[addonmanager.kubernetes.io/mode:EnsureExists] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"},"labels":{"addonmanager.kubernetes.io/mode":"EnsureExists"},"name":"standard"},"provisioner":"k8s.io/minikube-hostpath"}
	 storageclass.kubernetes.io/is-default-class:true] [] []  [{kubectl-client-side-apply Update storage.k8s.io/v1 2024-12-16 11:25:23 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{},"f:storageclass.kubernetes.io/is-default-class":{}},"f:labels":{".":{},"f:addonmanager.kubernetes.io/mode":{}}},"f:provisioner":{},"f:reclaimPolicy":{},"f:volumeBindingMode":{}}}]},Provisioner:k8s.io/minikube-hostpath,Parameters:map[string]string{},ReclaimPolicy:*Delete,MountOptions:[],AllowVolumeExpansion:nil,VolumeBindingMode:*Immediate,AllowedTopologies:[]TopologySelectorTerm{},} pvc-4d20e06c-947c-4bab-94b4-5c306a6e1b6a &PersistentVolumeClaim{ObjectMeta:{myclaim  default  4d20e06c-947c-4bab-94b4-5c306a6e1b6a 667 0 2024-12-16 11:27:10 +0000 UTC <nil> <nil> map[] map[kubectl.kubernetes.io/last-applied-configuration:{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"myclaim","namespace":"default"},"spec":{"accessModes":["ReadWriteOnce"],"resources":{"requests":{"storage":"500Mi"}},"volumeMode":"Filesystem"}}
	 volume.beta.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath volume.kubernetes.io/storage-provisioner:k8s.io/minikube-hostpath] [] [kubernetes.io/pvc-protection]  [{kube-controller-manager Update v1 2024-12-16 11:27:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{"f:volume.beta.kubernetes.io/storage-provisioner":{},"f:volume.kubernetes.io/storage-provisioner":{}}}}} {kubectl-client-side-apply Update v1 2024-12-16 11:27:10 +0000 UTC FieldsV1 {"f:metadata":{"f:annotations":{".":{},"f:kubectl.kubernetes.io/last-applied-configuration":{}}},"f:spec":{"f:accessModes":{},"f:resources":{"f:requests":{".":{},"f:storage":{}}},"f:volumeMode":{}}}}]},Spec:PersistentVolumeClaimSpec{AccessModes:[ReadWriteOnce],Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{storage: {{524288000 0} {<nil>} 500Mi BinarySI},},},VolumeName:,Selector:nil,StorageClassName:*standard,VolumeMode:*Filesystem,DataSource:nil,},Status:PersistentVolumeClaimStatus{Phase:Pending,AccessModes:[],Capacity:ResourceList{},Conditions:[]PersistentVolumeClaimCondition{},},} nil} to /tmp/hostpath-provisioner/default/myclaim
	I1216 11:27:10.538590       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"4d20e06c-947c-4bab-94b4-5c306a6e1b6a", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'Provisioning' External provisioner is provisioning volume for claim "default/myclaim"
	I1216 11:27:10.538889       1 controller.go:1439] provision "default/myclaim" class "standard": volume "pvc-4d20e06c-947c-4bab-94b4-5c306a6e1b6a" provisioned
	I1216 11:27:10.538961       1 controller.go:1456] provision "default/myclaim" class "standard": succeeded
	I1216 11:27:10.538990       1 volume_store.go:212] Trying to save persistentvolume "pvc-4d20e06c-947c-4bab-94b4-5c306a6e1b6a"
	I1216 11:27:10.572159       1 volume_store.go:219] persistentvolume "pvc-4d20e06c-947c-4bab-94b4-5c306a6e1b6a" saved
	I1216 11:27:10.572325       1 event.go:282] Event(v1.ObjectReference{Kind:"PersistentVolumeClaim", Namespace:"default", Name:"myclaim", UID:"4d20e06c-947c-4bab-94b4-5c306a6e1b6a", APIVersion:"v1", ResourceVersion:"667", FieldPath:""}): type: 'Normal' reason: 'ProvisioningSucceeded' Successfully provisioned volume pvc-4d20e06c-947c-4bab-94b4-5c306a6e1b6a
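
The provisioner log above shows "default/myclaim" provisioned and the volume saved, so dynamic provisioning itself succeeded; the test failed later on the consuming pod's image pull. A quick verification sketch using the names from this run:

	# The claim should be Bound to the dynamically provisioned volume.
	kubectl --context functional-300067 get pvc myclaim -o wide
	kubectl --context functional-300067 get pv pvc-4d20e06c-947c-4bab-94b4-5c306a6e1b6a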
	
	
	==> storage-provisioner [c3759f2b366b3add939630ca18cd37cef21e15c0b6b76ae9c7c1fc15332d60db] <==
	I1216 11:25:50.590451       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1216 11:25:54.459583       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1216 11:25:54.459637       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1216 11:26:11.904644       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1216 11:26:11.904900       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_functional-300067_b9d6f896-8f42-4a7b-9903-885b9a5e4933!
	I1216 11:26:11.904694       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"96a8b068-063d-4620-86aa-b0104dbd7043", APIVersion:"v1", ResourceVersion:"501", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' functional-300067_b9d6f896-8f42-4a7b-9903-885b9a5e4933 became leader
	I1216 11:26:12.006063       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_functional-300067_b9d6f896-8f42-4a7b-9903-885b9a5e4933!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p functional-300067 -n functional-300067
helpers_test.go:261: (dbg) Run:  kubectl --context functional-300067 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-mount sp-pod
helpers_test.go:274: ======> post-mortem[TestFunctional/parallel/PersistentVolumeClaim]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context functional-300067 describe pod busybox-mount sp-pod
helpers_test.go:282: (dbg) kubectl --context functional-300067 describe pod busybox-mount sp-pod:

                                                
                                                
-- stdout --
	Name:             busybox-mount
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-300067/192.168.49.2
	Start Time:       Mon, 16 Dec 2024 11:28:03 +0000
	Labels:           integration-test=busybox-mount
	Annotations:      <none>
	Status:           Succeeded
	IP:               10.244.0.8
	IPs:
	  IP:  10.244.0.8
	Containers:
	  mount-munger:
	    Container ID:  cri-o://a5a841013a39441f2686e68218589d6d85743329c920427a24e3ca0ed10389d6
	    Image:         gcr.io/k8s-minikube/busybox:1.28.4-glibc
	    Image ID:      gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
	    Port:          <none>
	    Host Port:     <none>
	    Command:
	      /bin/sh
	      -c
	      --
	    Args:
	      cat /mount-9p/created-by-test; echo test > /mount-9p/created-by-pod; rm /mount-9p/created-by-test-removed-by-pod; echo test > /mount-9p/created-by-pod-removed-by-test date >> /mount-9p/pod-dates
	    State:          Terminated
	      Reason:       Completed
	      Exit Code:    0
	      Started:      Mon, 16 Dec 2024 11:28:29 +0000
	      Finished:     Mon, 16 Dec 2024 11:28:29 +0000
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /mount-9p from test-volume (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-9d2p9 (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   False 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  test-volume:
	    Type:          HostPath (bare host directory volume)
	    Path:          /mount-9p
	    HostPathType:  
	  kube-api-access-9d2p9:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type    Reason     Age    From               Message
	  ----    ------     ----   ----               -------
	  Normal  Scheduled  2m11s  default-scheduler  Successfully assigned default/busybox-mount to functional-300067
	  Normal  Pulling    2m10s  kubelet            Pulling image "gcr.io/k8s-minikube/busybox:1.28.4-glibc"
	  Normal  Pulled     105s   kubelet            Successfully pulled image "gcr.io/k8s-minikube/busybox:1.28.4-glibc" in 3.312s (25.007s including waiting). Image size: 3774172 bytes.
	  Normal  Created    105s   kubelet            Created container mount-munger
	  Normal  Started    105s   kubelet            Started container mount-munger
	
	
	Name:             sp-pod
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             functional-300067/192.168.49.2
	Start Time:       Mon, 16 Dec 2024 11:27:10 +0000
	Labels:           test=storage-provisioner
	Annotations:      <none>
	Status:           Pending
	IP:               10.244.0.5
	IPs:
	  IP:  10.244.0.5
	Containers:
	  myfrontend:
	    Container ID:   
	    Image:          docker.io/nginx
	    Image ID:       
	    Port:           <none>
	    Host Port:      <none>
	    State:          Waiting
	      Reason:       ImagePullBackOff
	    Ready:          False
	    Restart Count:  0
	    Environment:    <none>
	    Mounts:
	      /tmp/mount from mypd (rw)
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-2s5lh (ro)
	Conditions:
	  Type                        Status
	  PodReadyToStartContainers   True 
	  Initialized                 True 
	  Ready                       False 
	  ContainersReady             False 
	  PodScheduled                True 
	Volumes:
	  mypd:
	    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
	    ClaimName:  myclaim
	    ReadOnly:   false
	  kube-api-access-2s5lh:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason     Age                  From               Message
	  ----     ------     ----                 ----               -------
	  Normal   Scheduled  3m4s                 default-scheduler  Successfully assigned default/sp-pod to functional-300067
	  Warning  Failed     2m32s                kubelet            Failed to pull image "docker.io/nginx": determining manifest MIME type for docker://nginx:latest: reading manifest sha256:6d3e464bc399ce5b0cd6a165162deb5926803c1c0ae8a1983ba0a1982b97a7a2 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     109s                 kubelet            Failed to pull image "docker.io/nginx": loading manifest for target platform: reading manifest sha256:6d3e464bc399ce5b0cd6a165162deb5926803c1c0ae8a1983ba0a1982b97a7a2 in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Warning  Failed     56s (x3 over 2m32s)  kubelet            Error: ErrImagePull
	  Warning  Failed     56s                  kubelet            Failed to pull image "docker.io/nginx": reading manifest latest in docker.io/library/nginx: toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
	  Normal   BackOff    30s (x4 over 2m32s)  kubelet            Back-off pulling image "docker.io/nginx"
	  Warning  Failed     30s (x4 over 2m32s)  kubelet            Error: ImagePullBackOff
	  Normal   Pulling    15s (x4 over 3m3s)   kubelet            Pulling image "docker.io/nginx"

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestFunctional/parallel/PersistentVolumeClaim FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestFunctional/parallel/PersistentVolumeClaim (188.78s)
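
The root cause is not storage: sp-pod never started because anonymous pulls of docker.io/nginx hit Docker Hub's toomanyrequests limit. A hedged workaround sketch; DOCKER_USER and DOCKER_PASS are placeholder credentials, not values from this run:

	# Authenticate pulls from docker.io so test pods stop hitting the rate limit.
	kubectl --context functional-300067 create secret docker-registry dockerhub \
	  --docker-server=https://index.docker.io/v1/ \
	  --docker-username="$DOCKER_USER" --docker-password="$DOCKER_PASS"
	# Attach the secret to the default service account so pod specs need no changes.
	kubectl --context functional-300067 patch serviceaccount default \
	  -p '{"imagePullSecrets":[{"name":"dockerhub"}]}'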

                                                
                                    
x
+
TestFunctional/parallel/MountCmd/specific-port (14.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-300067 /tmp/TestFunctionalparallelMountCmdspecific-port2251487725/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300067 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (352.343583ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 11:28:33.125001 1137938 retry.go:31] will retry after 404.617033ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300067 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (261.76482ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 11:28:33.791603 1137938 retry.go:31] will retry after 863.899397ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300067 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (282.72201ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 11:28:34.938633 1137938 retry.go:31] will retry after 803.094218ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300067 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (253.937394ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 11:28:35.996845 1137938 retry.go:31] will retry after 2.258098981s: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300067 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (264.697414ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 11:28:38.520004 1137938 retry.go:31] will retry after 2.236750342s: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300067 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (258.155987ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
I1216 11:28:41.015300 1137938 retry.go:31] will retry after 5.593652632s: exit status 1
E1216 11:28:42.770851 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300067 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (263.905674ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:253: /mount-9p did not appear within 14.100643849s: exit status 1
functional_test_mount_test.go:220: "TestFunctional/parallel/MountCmd/specific-port" failed, getting debug info...
functional_test_mount_test.go:221: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates"
functional_test_mount_test.go:221: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300067 ssh "mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates": exit status 1 (264.334742ms)

                                                
                                                
-- stdout --
	total 8
	drwxr-xr-x 2 root root 4096 Dec 16 11:28 .
	drwxr-xr-x 1 root root 4096 Dec 16 11:28 ..
	cat: /mount-9p/pod-dates: No such file or directory

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:223: debugging command "out/minikube-linux-arm64 -p functional-300067 ssh \"mount | grep 9p; ls -la /mount-9p; cat /mount-9p/pod-dates\"" failed : exit status 1
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300067 ssh "sudo umount -f /mount-9p": exit status 1 (269.045866ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-300067 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-300067 /tmp/TestFunctionalparallelMountCmdspecific-port2251487725/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:234: (dbg) [out/minikube-linux-arm64 mount -p functional-300067 /tmp/TestFunctionalparallelMountCmdspecific-port2251487725/001:/mount-9p --alsologtostderr -v=1 --port 46464] stdout:
* Mounting host path /tmp/TestFunctionalparallelMountCmdspecific-port2251487725/001 into VM as /mount-9p ...
- Mount type:   9p
- User ID:      docker
- Group ID:     docker
- Version:      9p2000.L
- Message Size: 262144
- Options:      map[]
- Bind Address: 192.168.49.1:46464
* Userspace file server: ufs starting
* Userspace file server is shutdown

                                                
                                                

                                                
                                                
functional_test_mount_test.go:234: (dbg) [out/minikube-linux-arm64 mount -p functional-300067 /tmp/TestFunctionalparallelMountCmdspecific-port2251487725/001:/mount-9p --alsologtostderr -v=1 --port 46464] stderr:
I1216 11:28:32.829490 1165449 out.go:345] Setting OutFile to fd 1 ...
I1216 11:28:32.829648 1165449 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 11:28:32.829654 1165449 out.go:358] Setting ErrFile to fd 2...
I1216 11:28:32.829658 1165449 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 11:28:32.830144 1165449 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-1132549/.minikube/bin
I1216 11:28:32.830452 1165449 mustload.go:65] Loading cluster: functional-300067
I1216 11:28:32.830839 1165449 config.go:182] Loaded profile config "functional-300067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 11:28:32.831302 1165449 cli_runner.go:164] Run: docker container inspect functional-300067 --format={{.State.Status}}
I1216 11:28:32.862904 1165449 host.go:66] Checking if "functional-300067" exists ...
I1216 11:28:32.863230 1165449 cli_runner.go:164] Run: docker system info --format "{{json .}}"
I1216 11:28:32.936161 1165449 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:51 SystemTime:2024-12-16 11:28:32.922931752 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
I1216 11:28:32.936321 1165449 cli_runner.go:164] Run: docker network inspect functional-300067 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1216 11:28:32.970091 1165449 out.go:177] * Mounting host path /tmp/TestFunctionalparallelMountCmdspecific-port2251487725/001 into VM as /mount-9p ...
I1216 11:28:32.975018 1165449 out.go:177]   - Mount type:   9p
I1216 11:28:32.977941 1165449 out.go:177]   - User ID:      docker
I1216 11:28:32.980876 1165449 out.go:177]   - Group ID:     docker
I1216 11:28:32.983768 1165449 out.go:177]   - Version:      9p2000.L
I1216 11:28:32.986629 1165449 out.go:177]   - Message Size: 262144
I1216 11:28:32.989518 1165449 out.go:177]   - Options:      map[]
I1216 11:28:32.992436 1165449 out.go:177]   - Bind Address: 192.168.49.1:46464
I1216 11:28:32.995240 1165449 out.go:177] * Userspace file server: 
I1216 11:28:32.996245 1165449 ssh_runner.go:195] Run: /bin/bash -c "[ "x$(findmnt -T /mount-9p | grep /mount-9p)" != "x" ] && sudo umount -f /mount-9p || echo "
I1216 11:28:32.998632 1165449 main.go:125] stdlog: ufs.go:27 listen tcp 192.168.49.1:46464: bind: address already in use
I1216 11:28:33.000198 1165449 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-300067
I1216 11:28:33.001437 1165449 out.go:177] * Userspace file server is shutdown
I1216 11:28:33.040186 1165449 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34251 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/functional-300067/id_rsa Username:docker}
I1216 11:28:33.140725 1165449 mount.go:180] unmount for /mount-9p ran successfully
I1216 11:28:33.140806 1165449 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /mount-9p"
I1216 11:28:33.150613 1165449 ssh_runner.go:195] Run: /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=46464,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p"
I1216 11:28:33.168060 1165449 out.go:201] 
W1216 11:28:33.171136 1165449 out.go:270] X Exiting due to GUEST_MOUNT: mount failed: mount with cmd /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=46464,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p" : /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=46464,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p": Process exited with status 32
stdout:

                                                
                                                
stderr:
mount: /mount-9p: mount(2) system call failed: Connection refused.

                                                
                                                
X Exiting due to GUEST_MOUNT: mount failed: mount with cmd /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=46464,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p" : /bin/bash -c "sudo mount -t 9p -o dfltgid=$(grep ^docker: /etc/group | cut -d: -f3),dfltuid=$(id -u docker),msize=262144,port=46464,trans=tcp,version=9p2000.L 192.168.49.1 /mount-9p": Process exited with status 32
stdout:

                                                
                                                
stderr:
mount: /mount-9p: mount(2) system call failed: Connection refused.

                                                
                                                
W1216 11:28:33.171166 1165449 out.go:270] * 
* 
W1216 11:28:33.177864 1165449 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_mount_108e3e029e1f2becb49871aa8c52e3a1d85a7cb0_0.log                   │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
│    * Please also attach the following file to the GitHub issue:                             │
│    * - /tmp/minikube_mount_108e3e029e1f2becb49871aa8c52e3a1d85a7cb0_0.log                   │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I1216 11:28:33.180892 1165449 out.go:201] 
--- FAIL: TestFunctional/parallel/MountCmd/specific-port (14.74s)
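
The mount never appeared because the userspace 9p server could not bind 192.168.49.1:46464 ("bind: address already in use" in the stderr above), most likely a listener left over from an earlier mount test; the in-guest mount(2) then failed with "Connection refused". A diagnostic sketch, assuming ss is available on the host; the retry port 46465 is arbitrary, not from this run:

	# Identify the process still holding the 9p server port.
	ss -ltnp 'sport = :46464'
	# Retry the mount on a free port once the stale listener is gone.
	out/minikube-linux-arm64 mount -p functional-300067 \
	  /tmp/TestFunctionalparallelMountCmdspecific-port2251487725/001:/mount-9p \
	  --alsologtostderr -v=1 --port 46465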

                                                
                                    

Test pass (295/330)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 6.55
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.1
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.16
12 TestDownloadOnly/v1.31.2/json-events 4.51
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.09
18 TestDownloadOnly/v1.31.2/DeleteAll 0.22
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.15
21 TestBinaryMirror 0.61
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.09
27 TestAddons/Setup 245.92
31 TestAddons/serial/GCPAuth/Namespaces 0.2
32 TestAddons/serial/GCPAuth/FakeCredentials 10.89
35 TestAddons/parallel/Registry 17.12
37 TestAddons/parallel/InspektorGadget 11.75
40 TestAddons/parallel/CSI 55.55
41 TestAddons/parallel/Headlamp 16.83
42 TestAddons/parallel/CloudSpanner 6.59
43 TestAddons/parallel/LocalPath 51.42
44 TestAddons/parallel/NvidiaDevicePlugin 6.52
45 TestAddons/parallel/Yakd 11.72
47 TestAddons/StoppedEnableDisable 12.15
48 TestCertOptions 34.78
49 TestCertExpiration 250.56
51 TestForceSystemdFlag 38.4
52 TestForceSystemdEnv 46.6
58 TestErrorSpam/setup 31.66
59 TestErrorSpam/start 0.8
60 TestErrorSpam/status 1.1
61 TestErrorSpam/pause 1.89
62 TestErrorSpam/unpause 1.87
63 TestErrorSpam/stop 1.53
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 50.73
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 32
70 TestFunctional/serial/KubeContext 0.05
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.38
75 TestFunctional/serial/CacheCmd/cache/add_local 1.4
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.33
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.17
80 TestFunctional/serial/CacheCmd/cache/delete 0.15
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.14
83 TestFunctional/serial/ExtraConfig 34.49
84 TestFunctional/serial/ComponentHealth 0.09
85 TestFunctional/serial/LogsCmd 1.75
86 TestFunctional/serial/LogsFileCmd 1.76
87 TestFunctional/serial/InvalidService 4.39
89 TestFunctional/parallel/ConfigCmd 0.49
90 TestFunctional/parallel/DashboardCmd 39.92
91 TestFunctional/parallel/DryRun 0.47
92 TestFunctional/parallel/InternationalLanguage 0.19
93 TestFunctional/parallel/StatusCmd 1.01
97 TestFunctional/parallel/ServiceCmdConnect 36.6
98 TestFunctional/parallel/AddonsCmd 0.15
101 TestFunctional/parallel/SSHCmd 0.7
102 TestFunctional/parallel/CpCmd 2.41
104 TestFunctional/parallel/FileSync 0.28
105 TestFunctional/parallel/CertSync 1.69
109 TestFunctional/parallel/NodeLabels 0.09
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.53
113 TestFunctional/parallel/License 0.25
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.59
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.46
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.09
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 6.22
126 TestFunctional/parallel/ServiceCmd/List 0.5
127 TestFunctional/parallel/ServiceCmd/JSONOutput 0.52
128 TestFunctional/parallel/ServiceCmd/HTTPS 0.37
129 TestFunctional/parallel/ServiceCmd/Format 0.37
130 TestFunctional/parallel/ServiceCmd/URL 0.58
131 TestFunctional/parallel/ProfileCmd/profile_not_create 0.43
132 TestFunctional/parallel/ProfileCmd/profile_list 0.44
133 TestFunctional/parallel/ProfileCmd/profile_json_output 0.45
134 TestFunctional/parallel/MountCmd/any-port 30.89
136 TestFunctional/parallel/MountCmd/VerifyCleanup 1.92
137 TestFunctional/parallel/Version/short 0.06
138 TestFunctional/parallel/Version/components 1.11
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.23
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.23
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.23
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.26
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.78
144 TestFunctional/parallel/ImageCommands/Setup 0.68
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.35
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.95
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.2
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.54
149 TestFunctional/parallel/ImageCommands/ImageRemove 0.57
150 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.83
151 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.62
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.15
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.15
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.14
155 TestFunctional/delete_echo-server_images 0.05
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 176.3
162 TestMultiControlPlane/serial/DeployApp 9.45
163 TestMultiControlPlane/serial/PingHostFromPods 1.69
164 TestMultiControlPlane/serial/AddWorkerNode 36.87
165 TestMultiControlPlane/serial/NodeLabels 0.12
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.03
167 TestMultiControlPlane/serial/CopyFile 18.94
168 TestMultiControlPlane/serial/StopSecondaryNode 12.69
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.8
170 TestMultiControlPlane/serial/RestartSecondaryNode 24.85
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.45
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 206.08
173 TestMultiControlPlane/serial/DeleteSecondaryNode 12.68
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.78
175 TestMultiControlPlane/serial/StopCluster 35.71
176 TestMultiControlPlane/serial/RestartCluster 121.05
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.75
178 TestMultiControlPlane/serial/AddSecondaryNode 72.34
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.99
183 TestJSONOutput/start/Command 51.9
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.73
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.69
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.87
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.26
208 TestKicCustomNetwork/create_custom_network 37.79
209 TestKicCustomNetwork/use_default_bridge_network 36.32
210 TestKicExistingNetwork 32.74
211 TestKicCustomSubnet 33.76
212 TestKicStaticIP 32.46
213 TestMainNoArgs 0.05
214 TestMinikubeProfile 70.28
217 TestMountStart/serial/StartWithMountFirst 9.99
218 TestMountStart/serial/VerifyMountFirst 0.26
219 TestMountStart/serial/StartWithMountSecond 7.43
220 TestMountStart/serial/VerifyMountSecond 0.26
221 TestMountStart/serial/DeleteFirst 1.61
222 TestMountStart/serial/VerifyMountPostDelete 0.27
223 TestMountStart/serial/Stop 1.2
224 TestMountStart/serial/RestartStopped 7.64
225 TestMountStart/serial/VerifyMountPostStop 0.27
228 TestMultiNode/serial/FreshStart2Nodes 77.63
229 TestMultiNode/serial/DeployApp2Nodes 6.44
230 TestMultiNode/serial/PingHostFrom2Pods 1.05
231 TestMultiNode/serial/AddNode 27.97
232 TestMultiNode/serial/MultiNodeLabels 0.09
233 TestMultiNode/serial/ProfileList 0.7
234 TestMultiNode/serial/CopyFile 10.21
235 TestMultiNode/serial/StopNode 2.28
236 TestMultiNode/serial/StartAfterStop 9.98
237 TestMultiNode/serial/RestartKeepsNodes 105.97
238 TestMultiNode/serial/DeleteNode 5.64
239 TestMultiNode/serial/StopMultiNode 23.79
240 TestMultiNode/serial/RestartMultiNode 53.45
241 TestMultiNode/serial/ValidateNameConflict 33.91
246 TestPreload 126.15
248 TestScheduledStopUnix 107.89
251 TestInsufficientStorage 10.19
252 TestRunningBinaryUpgrade 107.54
254 TestKubernetesUpgrade 392.66
255 TestMissingContainerUpgrade 167.95
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
258 TestNoKubernetes/serial/StartWithK8s 41.48
259 TestNoKubernetes/serial/StartWithStopK8s 8.42
260 TestNoKubernetes/serial/Start 7.62
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.27
262 TestNoKubernetes/serial/ProfileList 1.03
263 TestNoKubernetes/serial/Stop 1.27
264 TestNoKubernetes/serial/StartNoArgs 7.75
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.37
266 TestStoppedBinaryUpgrade/Setup 0.6
267 TestStoppedBinaryUpgrade/Upgrade 71.91
268 TestStoppedBinaryUpgrade/MinikubeLogs 1
277 TestPause/serial/Start 55.48
278 TestPause/serial/SecondStartNoReconfiguration 24.25
279 TestPause/serial/Pause 1.04
280 TestPause/serial/VerifyStatus 0.42
281 TestPause/serial/Unpause 1.07
282 TestPause/serial/PauseAgain 1.37
283 TestPause/serial/DeletePaused 3.03
284 TestPause/serial/VerifyDeletedResources 0.41
292 TestNetworkPlugins/group/false 5.52
297 TestStartStop/group/old-k8s-version/serial/FirstStart 166.01
298 TestStartStop/group/old-k8s-version/serial/DeployApp 9.8
300 TestStartStop/group/no-preload/serial/FirstStart 64.81
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.42
302 TestStartStop/group/old-k8s-version/serial/Stop 13.95
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.24
304 TestStartStop/group/old-k8s-version/serial/SecondStart 148.3
305 TestStartStop/group/no-preload/serial/DeployApp 10.42
306 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.7
307 TestStartStop/group/no-preload/serial/Stop 12.36
308 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.21
309 TestStartStop/group/no-preload/serial/SecondStart 266.53
310 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
311 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.09
312 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.26
313 TestStartStop/group/old-k8s-version/serial/Pause 3
315 TestStartStop/group/embed-certs/serial/FirstStart 55.04
316 TestStartStop/group/embed-certs/serial/DeployApp 10.35
317 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.19
318 TestStartStop/group/embed-certs/serial/Stop 11.94
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.21
320 TestStartStop/group/embed-certs/serial/SecondStart 265.68
321 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
322 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 6.1
323 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.24
324 TestStartStop/group/no-preload/serial/Pause 3.28
326 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 51.21
327 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 10.35
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.11
329 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.96
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.2
331 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 281.52
332 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
333 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.1
334 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.25
335 TestStartStop/group/embed-certs/serial/Pause 3.16
337 TestStartStop/group/newest-cni/serial/FirstStart 34.68
338 TestStartStop/group/newest-cni/serial/DeployApp 0
339 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.09
340 TestStartStop/group/newest-cni/serial/Stop 1.3
341 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.21
342 TestStartStop/group/newest-cni/serial/SecondStart 15.6
343 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.3
346 TestStartStop/group/newest-cni/serial/Pause 3.59
347 TestNetworkPlugins/group/auto/Start 54.33
348 TestNetworkPlugins/group/auto/KubeletFlags 0.3
349 TestNetworkPlugins/group/auto/NetCatPod 10.32
350 TestNetworkPlugins/group/auto/DNS 0.2
351 TestNetworkPlugins/group/auto/Localhost 0.18
352 TestNetworkPlugins/group/auto/HairPin 0.15
353 TestNetworkPlugins/group/kindnet/Start 56.22
354 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6.01
355 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.09
356 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.26
357 TestStartStop/group/default-k8s-diff-port/serial/Pause 3.59
358 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
359 TestNetworkPlugins/group/calico/Start 64.54
360 TestNetworkPlugins/group/kindnet/KubeletFlags 0.53
361 TestNetworkPlugins/group/kindnet/NetCatPod 11.5
362 TestNetworkPlugins/group/kindnet/DNS 0.25
363 TestNetworkPlugins/group/kindnet/Localhost 0.2
364 TestNetworkPlugins/group/kindnet/HairPin 0.2
365 TestNetworkPlugins/group/custom-flannel/Start 67.41
366 TestNetworkPlugins/group/calico/ControllerPod 6.01
367 TestNetworkPlugins/group/calico/KubeletFlags 0.38
368 TestNetworkPlugins/group/calico/NetCatPod 12.32
369 TestNetworkPlugins/group/calico/DNS 0.21
370 TestNetworkPlugins/group/calico/Localhost 0.17
371 TestNetworkPlugins/group/calico/HairPin 0.18
372 TestNetworkPlugins/group/enable-default-cni/Start 49.77
373 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.37
374 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.31
375 TestNetworkPlugins/group/custom-flannel/DNS 0.23
376 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
377 TestNetworkPlugins/group/custom-flannel/HairPin 0.2
378 TestNetworkPlugins/group/flannel/Start 58.15
379 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.36
380 TestNetworkPlugins/group/enable-default-cni/NetCatPod 165.34
381 TestNetworkPlugins/group/flannel/ControllerPod 6.01
382 TestNetworkPlugins/group/flannel/KubeletFlags 0.27
383 TestNetworkPlugins/group/flannel/NetCatPod 11.25
384 TestNetworkPlugins/group/flannel/DNS 0.17
385 TestNetworkPlugins/group/flannel/Localhost 0.14
386 TestNetworkPlugins/group/flannel/HairPin 0.15
387 TestNetworkPlugins/group/bridge/Start 80.44
388 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
389 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
390 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
391 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
392 TestNetworkPlugins/group/bridge/NetCatPod 12.29
393 TestNetworkPlugins/group/bridge/DNS 0.28
394 TestNetworkPlugins/group/bridge/Localhost 0.23
395 TestNetworkPlugins/group/bridge/HairPin 0.2

TestDownloadOnly/v1.20.0/json-events (6.55s)
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-333054 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-333054 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (6.554168509s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (6.55s)

TestDownloadOnly/v1.20.0/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1216 11:13:07.076901 1137938 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1216 11:13:07.076981 1137938 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20107-1132549/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.1s)
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-333054
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-333054: exit status 85 (95.086373ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-333054 | jenkins | v1.34.0 | 16 Dec 24 11:13 UTC |          |
	|         | -p download-only-333054        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 11:13:00
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 11:13:00.567235 1137944 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:13:00.567460 1137944 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:13:00.567488 1137944 out.go:358] Setting ErrFile to fd 2...
	I1216 11:13:00.567506 1137944 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:13:00.567790 1137944 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-1132549/.minikube/bin
	W1216 11:13:00.567978 1137944 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20107-1132549/.minikube/config/config.json: open /home/jenkins/minikube-integration/20107-1132549/.minikube/config/config.json: no such file or directory
	I1216 11:13:00.568457 1137944 out.go:352] Setting JSON to true
	I1216 11:13:00.569704 1137944 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":28526,"bootTime":1734319055,"procs":167,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1216 11:13:00.569904 1137944 start.go:139] virtualization:  
	I1216 11:13:00.574123 1137944 out.go:97] [download-only-333054] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W1216 11:13:00.574352 1137944 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20107-1132549/.minikube/cache/preloaded-tarball: no such file or directory
	I1216 11:13:00.574452 1137944 notify.go:220] Checking for updates...
	I1216 11:13:00.577393 1137944 out.go:169] MINIKUBE_LOCATION=20107
	I1216 11:13:00.580433 1137944 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 11:13:00.583415 1137944 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20107-1132549/kubeconfig
	I1216 11:13:00.586280 1137944 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-1132549/.minikube
	I1216 11:13:00.589255 1137944 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1216 11:13:00.595253 1137944 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 11:13:00.595528 1137944 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 11:13:00.619754 1137944 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1216 11:13:00.619875 1137944 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 11:13:00.685724 1137944 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-16 11:13:00.676938246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1216 11:13:00.685847 1137944 docker.go:318] overlay module found
	I1216 11:13:00.688965 1137944 out.go:97] Using the docker driver based on user configuration
	I1216 11:13:00.689001 1137944 start.go:297] selected driver: docker
	I1216 11:13:00.689009 1137944 start.go:901] validating driver "docker" against <nil>
	I1216 11:13:00.689118 1137944 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 11:13:00.748977 1137944 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-16 11:13:00.739465147 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1216 11:13:00.749185 1137944 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 11:13:00.749468 1137944 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1216 11:13:00.749641 1137944 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 11:13:00.752894 1137944 out.go:169] Using Docker driver with root privileges
	I1216 11:13:00.755595 1137944 cni.go:84] Creating CNI manager for ""
	I1216 11:13:00.755662 1137944 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1216 11:13:00.755676 1137944 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1216 11:13:00.755764 1137944 start.go:340] cluster config:
	{Name:download-only-333054 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-333054 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:13:00.758807 1137944 out.go:97] Starting "download-only-333054" primary control-plane node in "download-only-333054" cluster
	I1216 11:13:00.758833 1137944 cache.go:121] Beginning downloading kic base image for docker with crio
	I1216 11:13:00.761884 1137944 out.go:97] Pulling base image v0.0.45-1733912881-20083 ...
	I1216 11:13:00.761929 1137944 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 11:13:00.762101 1137944 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local docker daemon
	I1216 11:13:00.777867 1137944 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 to local cache
	I1216 11:13:00.778067 1137944 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 in local cache directory
	I1216 11:13:00.778170 1137944 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 to local cache
	I1216 11:13:00.825115 1137944 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I1216 11:13:00.825160 1137944 cache.go:56] Caching tarball of preloaded images
	I1216 11:13:00.825352 1137944 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1216 11:13:00.828822 1137944 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1216 11:13:00.828857 1137944 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I1216 11:13:00.917313 1137944 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/20107-1132549/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I1216 11:13:05.255306 1137944 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 as a tarball
	I1216 11:13:05.421361 1137944 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I1216 11:13:05.421463 1137944 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20107-1132549/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	
	
	* The control-plane node download-only-333054 host does not exist
	  To start a cluster, run: "minikube start -p download-only-333054"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.10s)

TestDownloadOnly/v1.20.0/DeleteAll (0.23s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.16s)
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-333054
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.16s)

TestDownloadOnly/v1.31.2/json-events (4.51s)
=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-400206 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-400206 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (4.51196669s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (4.51s)

TestDownloadOnly/v1.31.2/preload-exists (0s)
=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1216 11:13:12.072687 1137938 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1216 11:13:12.072730 1137938 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20107-1132549/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

TestDownloadOnly/v1.31.2/LogsDuration (0.09s)
=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-400206
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-400206: exit status 85 (92.358342ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-333054 | jenkins | v1.34.0 | 16 Dec 24 11:13 UTC |                     |
	|         | -p download-only-333054        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 16 Dec 24 11:13 UTC | 16 Dec 24 11:13 UTC |
	| delete  | -p download-only-333054        | download-only-333054 | jenkins | v1.34.0 | 16 Dec 24 11:13 UTC | 16 Dec 24 11:13 UTC |
	| start   | -o=json --download-only        | download-only-400206 | jenkins | v1.34.0 | 16 Dec 24 11:13 UTC |                     |
	|         | -p download-only-400206        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/16 11:13:07
	Running on machine: ip-172-31-21-244
	Binary: Built with gc go1.23.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1216 11:13:07.610265 1138144 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:13:07.610472 1138144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:13:07.610484 1138144 out.go:358] Setting ErrFile to fd 2...
	I1216 11:13:07.610489 1138144 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:13:07.610775 1138144 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-1132549/.minikube/bin
	I1216 11:13:07.611238 1138144 out.go:352] Setting JSON to true
	I1216 11:13:07.612151 1138144 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":28533,"bootTime":1734319055,"procs":165,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1216 11:13:07.612222 1138144 start.go:139] virtualization:  
	I1216 11:13:07.615902 1138144 out.go:97] [download-only-400206] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1216 11:13:07.616137 1138144 notify.go:220] Checking for updates...
	I1216 11:13:07.619262 1138144 out.go:169] MINIKUBE_LOCATION=20107
	I1216 11:13:07.622259 1138144 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 11:13:07.625128 1138144 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20107-1132549/kubeconfig
	I1216 11:13:07.628319 1138144 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-1132549/.minikube
	I1216 11:13:07.631304 1138144 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1216 11:13:07.637029 1138144 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1216 11:13:07.637298 1138144 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 11:13:07.668511 1138144 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1216 11:13:07.668619 1138144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 11:13:07.724421 1138144 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-16 11:13:07.715008246 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1216 11:13:07.724526 1138144 docker.go:318] overlay module found
	I1216 11:13:07.727576 1138144 out.go:97] Using the docker driver based on user configuration
	I1216 11:13:07.727609 1138144 start.go:297] selected driver: docker
	I1216 11:13:07.727618 1138144 start.go:901] validating driver "docker" against <nil>
	I1216 11:13:07.727747 1138144 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 11:13:07.781792 1138144 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:45 SystemTime:2024-12-16 11:13:07.772978896 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1216 11:13:07.782007 1138144 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1216 11:13:07.782288 1138144 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1216 11:13:07.782442 1138144 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1216 11:13:07.785648 1138144 out.go:169] Using Docker driver with root privileges
	
	
	* The control-plane node download-only-400206 host does not exist
	  To start a cluster, run: "minikube start -p download-only-400206"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.09s)

TestDownloadOnly/v1.31.2/DeleteAll (0.22s)
=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.22s)

TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.15s)
=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-400206
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.15s)

TestBinaryMirror (0.61s)
=== RUN   TestBinaryMirror
I1216 11:13:13.405843 1137938 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-469402 --alsologtostderr --binary-mirror http://127.0.0.1:43945 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-469402" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-469402
--- PASS: TestBinaryMirror (0.61s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-467441
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-467441: exit status 85 (65.762717ms)

-- stdout --
	* Profile "addons-467441" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-467441"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-467441
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-467441: exit status 85 (85.243181ms)

-- stdout --
	* Profile "addons-467441" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-467441"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.09s)

TestAddons/Setup (245.92s)
=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-467441 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-467441 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (4m5.916052676s)
--- PASS: TestAddons/Setup (245.92s)

TestAddons/serial/GCPAuth/Namespaces (0.2s)
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-467441 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-467441 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.20s)

TestAddons/serial/GCPAuth/FakeCredentials (10.89s)
=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-467441 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-467441 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f997b8ca-d450-479a-970a-4f57427f5a4d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f997b8ca-d450-479a-970a-4f57427f5a4d] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 10.003978957s
addons_test.go:633: (dbg) Run:  kubectl --context addons-467441 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-467441 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-467441 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-467441 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.89s)

TestAddons/parallel/Registry (17.12s)
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 11.193664ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5cc95cd69-f5zh4" [e511e988-2365-410f-8684-de95a39675bf] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.004031842s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-x5969" [ebb5d950-3c97-4dff-b737-8817d4630dcc] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.003692164s
addons_test.go:331: (dbg) Run:  kubectl --context addons-467441 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-467441 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-467441 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.918202674s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-467441 ip
2024/12/16 11:17:56 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-467441 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.12s)

TestAddons/parallel/InspektorGadget (11.75s)
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-sjbz4" [9e31de2f-58d0-443a-aa24-d36caf9644ad] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004901919s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-467441 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-467441 addons disable inspektor-gadget --alsologtostderr -v=1: (5.73983344s)
--- PASS: TestAddons/parallel/InspektorGadget (11.75s)

TestAddons/parallel/CSI (55.55s)
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1216 11:17:56.726611 1137938 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1216 11:17:56.735619 1137938 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1216 11:17:56.735652 1137938 kapi.go:107] duration metric: took 9.053373ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 9.078652ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-467441 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-467441 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [6e4478c1-efc1-4514-adc7-086a6b7e20e2] Pending
helpers_test.go:344: "task-pv-pod" [6e4478c1-efc1-4514-adc7-086a6b7e20e2] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [6e4478c1-efc1-4514-adc7-086a6b7e20e2] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 10.003324156s
addons_test.go:511: (dbg) Run:  kubectl --context addons-467441 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-467441 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-467441 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-467441 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-467441 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-467441 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-467441 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [b65ca769-16c1-4d63-8a3d-9a45e56084e6] Pending
helpers_test.go:344: "task-pv-pod-restore" [b65ca769-16c1-4d63-8a3d-9a45e56084e6] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [b65ca769-16c1-4d63-8a3d-9a45e56084e6] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003831748s
addons_test.go:553: (dbg) Run:  kubectl --context addons-467441 delete pod task-pv-pod-restore
addons_test.go:557: (dbg) Run:  kubectl --context addons-467441 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-467441 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-467441 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-467441 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-467441 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.807758895s)
--- PASS: TestAddons/parallel/CSI (55.55s)
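The nineteen identical helpers_test.go:394 lines above are a poll loop: the helper shells out to kubectl every few seconds until the PVC's .status.phase reads "Bound" or the 6m0s deadline expires. Below is a minimal sketch of that pattern in Go, assuming only that kubectl is on PATH; the function name and the 2-second interval are illustrative, not minikube's actual helper.

// pvcwait.go — minimal sketch of the poll loop behind the repeated
// helpers_test.go:394 lines: run kubectl, read the PVC phase, retry
// until "Bound" or the deadline passes.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func waitForPVCBound(kctx, name, namespace string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command("kubectl", "--context", kctx,
			"get", "pvc", name, "-o", "jsonpath={.status.phase}", "-n", namespace).Output()
		if err == nil && strings.TrimSpace(string(out)) == "Bound" {
			return nil
		}
		time.Sleep(2 * time.Second) // poll interval is a guess, not minikube's value
	}
	return fmt.Errorf("pvc %s/%s did not reach Bound within %v", namespace, name, timeout)
}

func main() {
	if err := waitForPVCBound("addons-467441", "hpvc", "default", 6*time.Minute); err != nil {
		fmt.Println(err)
	}
}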

                                                
                                    
TestAddons/parallel/Headlamp (16.83s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-467441 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-wrggq" [aab9c46b-f453-44d2-95e2-abfeb216d350] Pending
helpers_test.go:344: "headlamp-cd8ffd6fc-wrggq" [aab9c46b-f453-44d2-95e2-abfeb216d350] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-wrggq" [aab9c46b-f453-44d2-95e2-abfeb216d350] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 10.003387152s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-467441 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-467441 addons disable headlamp --alsologtostderr -v=1: (5.864318019s)
--- PASS: TestAddons/parallel/Headlamp (16.83s)

                                                
                                    
TestAddons/parallel/CloudSpanner (6.59s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-6fvnl" [4dbe6e1f-a347-404d-96a9-ef7fe48b7632] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003846766s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-467441 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (6.59s)

                                                
                                    
TestAddons/parallel/LocalPath (51.42s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-467441 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-467441 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-467441 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [9dacb534-de22-4046-a599-1147944eb97a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [9dacb534-de22-4046-a599-1147944eb97a] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [9dacb534-de22-4046-a599-1147944eb97a] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 3.004190719s
addons_test.go:906: (dbg) Run:  kubectl --context addons-467441 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-467441 ssh "cat /opt/local-path-provisioner/pvc-983a0cf1-7667-42be-95ff-08973df1d4de_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-467441 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-467441 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-467441 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-467441 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.317903374s)
--- PASS: TestAddons/parallel/LocalPath (51.42s)
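The ssh "cat /opt/local-path-provisioner/..." step above reveals the local-path provisioner's on-disk layout: each volume gets a directory named <pv-name>_<namespace>_<claim-name> under the provisioner root, where the PV name is the pvc-<uid> string from the bound claim. A minimal Go sketch of assembling and reading that path with the values from this run follows; the helper function is illustrative.

// localpath.go — sketch of how the test derives the provisioned file's path:
// <root>/<pv-name>_<namespace>_<claim-name>/<file>, then reads it over
// `minikube ssh` exactly as the log line above does.
package main

import (
	"fmt"
	"os/exec"
)

func localPathFile(pvName, namespace, claim, file string) string {
	return fmt.Sprintf("/opt/local-path-provisioner/%s_%s_%s/%s", pvName, namespace, claim, file)
}

func main() {
	path := localPathFile("pvc-983a0cf1-7667-42be-95ff-08973df1d4de", "default", "test-pvc", "file1")
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "addons-467441",
		"ssh", "cat "+path).CombinedOutput()
	fmt.Println(string(out), err)
}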

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (6.52s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-zh27s" [29ad869e-9aed-4717-ab7c-b8ba4cf3c784] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.003672026s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-467441 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.52s)

                                                
                                    
TestAddons/parallel/Yakd (11.72s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-hp8vn" [201b734a-855a-49fb-a6b8-6d45a22200f4] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004196963s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-467441 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-467441 addons disable yakd --alsologtostderr -v=1: (5.716102321s)
--- PASS: TestAddons/parallel/Yakd (11.72s)

                                                
                                    
TestAddons/StoppedEnableDisable (12.15s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-467441
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-467441: (11.865749443s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-467441
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-467441
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-467441
--- PASS: TestAddons/StoppedEnableDisable (12.15s)

                                                
                                    
TestCertOptions (34.78s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-130676 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
E1216 12:07:20.831955 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-130676 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (32.0446527s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-130676 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-130676 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-130676 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-130676" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-130676
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-130676: (2.038707688s)
--- PASS: TestCertOptions (34.78s)
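The openssl x509 -text step at cert_options_test.go:60 is asserting that the extra --apiserver-ips and --apiserver-names flags ended up as subject alternative names in apiserver.crt. A minimal Go sketch of the same inspection with crypto/x509, assuming the certificate has first been copied out of the node to a local apiserver.crt (that path is illustrative):

// sancheck.go — print the SANs of the apiserver certificate; the test
// expects to see localhost and www.google.com among the DNS names and
// 127.0.0.1 and 192.168.15.15 among the IPs.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("apiserver.crt") // e.g. fetched via `minikube ssh sudo cat ...`
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	fmt.Println("DNS SANs:", cert.DNSNames)    // expect localhost, www.google.com, ...
	fmt.Println("IP SANs:", cert.IPAddresses)  // expect 127.0.0.1, 192.168.15.15, ...
}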

                                                
                                    
TestCertExpiration (250.56s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-292236 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-292236 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (40.288058108s)
E1216 12:07:04.607498 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-292236 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-292236 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (27.770348683s)
helpers_test.go:175: Cleaning up "cert-expiration-292236" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-292236
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-292236: (2.502700978s)
--- PASS: TestCertExpiration (250.56s)

                                                
                                    
TestForceSystemdFlag (38.4s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-632755 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-632755 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.338417115s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-632755 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-632755" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-632755
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-632755: (2.63309152s)
--- PASS: TestForceSystemdFlag (38.40s)
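The cat /etc/crio/crio.conf.d/02-crio.conf step checks which cgroup manager --force-systemd configured. The drop-in's contents are not shown in this log, so the expected cgroup_manager = "systemd" line in the sketch below is an assumption based on CRI-O's TOML config format, not a quote from the file.

// systemdcheck.go — minimal sketch of the assertion behind the cat step:
// read CRI-O's drop-in over `minikube ssh` and look for the systemd
// cgroup manager setting (the expected substring is an assumption).
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "force-systemd-flag-632755",
		"ssh", "cat /etc/crio/crio.conf.d/02-crio.conf").Output()
	if err != nil {
		panic(err)
	}
	if strings.Contains(string(out), `cgroup_manager = "systemd"`) {
		fmt.Println("CRI-O is using the systemd cgroup manager")
	}
}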

                                                
                                    
TestForceSystemdEnv (46.6s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-236758 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-236758 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (43.971806924s)
helpers_test.go:175: Cleaning up "force-systemd-env-236758" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-236758
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-236758: (2.631843965s)
--- PASS: TestForceSystemdEnv (46.60s)

                                                
                                    
TestErrorSpam/setup (31.66s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-158608 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-158608 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-158608 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-158608 --driver=docker  --container-runtime=crio: (31.656121529s)
--- PASS: TestErrorSpam/setup (31.66s)

                                                
                                    
TestErrorSpam/start (0.8s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-158608 --log_dir /tmp/nospam-158608 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-158608 --log_dir /tmp/nospam-158608 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-158608 --log_dir /tmp/nospam-158608 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

                                                
                                    
TestErrorSpam/status (1.1s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-158608 --log_dir /tmp/nospam-158608 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-158608 --log_dir /tmp/nospam-158608 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-158608 --log_dir /tmp/nospam-158608 status
--- PASS: TestErrorSpam/status (1.10s)

                                                
                                    
TestErrorSpam/pause (1.89s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-158608 --log_dir /tmp/nospam-158608 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-158608 --log_dir /tmp/nospam-158608 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-158608 --log_dir /tmp/nospam-158608 pause
--- PASS: TestErrorSpam/pause (1.89s)

                                                
                                    
TestErrorSpam/unpause (1.87s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-158608 --log_dir /tmp/nospam-158608 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-158608 --log_dir /tmp/nospam-158608 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-158608 --log_dir /tmp/nospam-158608 unpause
--- PASS: TestErrorSpam/unpause (1.87s)

                                                
                                    
TestErrorSpam/stop (1.53s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-158608 --log_dir /tmp/nospam-158608 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-158608 --log_dir /tmp/nospam-158608 stop: (1.317339295s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-158608 --log_dir /tmp/nospam-158608 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-158608 --log_dir /tmp/nospam-158608 stop
--- PASS: TestErrorSpam/stop (1.53s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20107-1132549/.minikube/files/etc/test/nested/copy/1137938/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (50.73s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-300067 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-300067 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (50.731389859s)
--- PASS: TestFunctional/serial/StartWithProxy (50.73s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (32s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I1216 11:25:39.355789 1137938 config.go:182] Loaded profile config "functional-300067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-300067 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-300067 --alsologtostderr -v=8: (32.004218844s)
functional_test.go:663: soft start took 32.004738683s for "functional-300067" cluster.
I1216 11:26:11.360311 1137938 config.go:182] Loaded profile config "functional-300067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (32.00s)

                                                
                                    
TestFunctional/serial/KubeContext (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.05s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-300067 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (4.38s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-300067 cache add registry.k8s.io/pause:3.1: (1.461561204s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-300067 cache add registry.k8s.io/pause:3.3: (1.490256991s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-300067 cache add registry.k8s.io/pause:latest: (1.429584792s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.38s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.4s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-300067 /tmp/TestFunctionalserialCacheCmdcacheadd_local2740885106/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 cache add minikube-local-cache-test:functional-300067
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 cache delete minikube-local-cache-test:functional-300067
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-300067
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.40s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.33s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (2.17s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300067 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (296.695358ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-300067 cache reload: (1.227293489s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.17s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.15s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.15s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 kubectl -- --context functional-300067 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-300067 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.14s)

                                                
                                    
TestFunctional/serial/ExtraConfig (34.49s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-300067 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-300067 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (34.491937262s)
functional_test.go:761: restart took 34.49204208s for "functional-300067" cluster.
I1216 11:26:54.829121 1137938 config.go:182] Loaded profile config "functional-300067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (34.49s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-300067 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.09s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.75s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-300067 logs: (1.745266309s)
--- PASS: TestFunctional/serial/LogsCmd (1.75s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.76s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 logs --file /tmp/TestFunctionalserialLogsFileCmd4141695011/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-300067 logs --file /tmp/TestFunctionalserialLogsFileCmd4141695011/001/logs.txt: (1.757777981s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.76s)

                                                
                                    
TestFunctional/serial/InvalidService (4.39s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-300067 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-300067
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-300067: exit status 115 (621.624028ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31151 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-300067 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.39s)
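The SVC_UNREACHABLE exit above describes a service that has a NodePort URL but no running pod behind it. One way to observe that condition is that the service's Endpoints object has no ready addresses; the sketch below checks that under the same assumption (the check itself is illustrative, not the test's own code).

// svccheck.go — minimal sketch of the condition behind SVC_UNREACHABLE:
// a service whose Endpoints object lists no ready pod IPs.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, _ := exec.Command("kubectl", "--context", "functional-300067",
		"get", "endpoints", "invalid-svc",
		"-o", "jsonpath={.subsets[*].addresses[*].ip}").Output()
	if strings.TrimSpace(string(out)) == "" {
		fmt.Println("no ready endpoints: no running pod for service invalid-svc found")
	}
}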

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300067 config get cpus: exit status 14 (76.170119ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300067 config get cpus: exit status 14 (73.152843ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.49s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (39.92s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-300067 --alsologtostderr -v=1]
2024/12/16 11:29:30 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-300067 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 1167019: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (39.92s)

                                                
                                    
TestFunctional/parallel/DryRun (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-300067 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-300067 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (204.69683ms)

                                                
                                                
-- stdout --
	* [functional-300067] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20107-1132549/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-1132549/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 11:28:50.674954 1166781 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:28:50.675103 1166781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:28:50.675114 1166781 out.go:358] Setting ErrFile to fd 2...
	I1216 11:28:50.675119 1166781 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:28:50.675396 1166781 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-1132549/.minikube/bin
	I1216 11:28:50.675742 1166781 out.go:352] Setting JSON to false
	I1216 11:28:50.676621 1166781 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":29476,"bootTime":1734319055,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1216 11:28:50.676694 1166781 start.go:139] virtualization:  
	I1216 11:28:50.680202 1166781 out.go:177] * [functional-300067] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1216 11:28:50.683986 1166781 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 11:28:50.684058 1166781 notify.go:220] Checking for updates...
	I1216 11:28:50.689644 1166781 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 11:28:50.692564 1166781 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20107-1132549/kubeconfig
	I1216 11:28:50.695495 1166781 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-1132549/.minikube
	I1216 11:28:50.698425 1166781 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 11:28:50.701406 1166781 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 11:28:50.704876 1166781 config.go:182] Loaded profile config "functional-300067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:28:50.705446 1166781 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 11:28:50.740939 1166781 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1216 11:28:50.741137 1166781 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 11:28:50.806274 1166781 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:51 SystemTime:2024-12-16 11:28:50.790228115 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1216 11:28:50.806468 1166781 docker.go:318] overlay module found
	I1216 11:28:50.809670 1166781 out.go:177] * Using the docker driver based on existing profile
	I1216 11:28:50.812372 1166781 start.go:297] selected driver: docker
	I1216 11:28:50.812389 1166781 start.go:901] validating driver "docker" against &{Name:functional-300067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-300067 Namespace:default APIServerHAVIP: APIServerNa
me:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP
: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:28:50.812493 1166781 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 11:28:50.816255 1166781 out.go:201] 
	W1216 11:28:50.819062 1166781 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1216 11:28:50.821849 1166781 out.go:201] 

                                                
                                                
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-300067 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.47s)
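The exit status 23 and RSRC_INSUFFICIENT_REQ_MEMORY message above show --dry-run validating the requested --memory against a usable floor before any work happens. A minimal sketch of that kind of guard follows; the 1800MB floor comes from the error text, while the exit-code reuse and function shape are illustrative, not minikube's actual implementation.

// memcheck.go — sketch of a pre-flight memory check like the one that
// rejected --memory 250MB in the dry-run test above.
package main

import (
	"fmt"
	"os"
)

const minUsableMemoryMB = 1800 // from the error message above

func validateRequestedMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("requested memory allocation %dMiB is less than the usable minimum of %dMB",
			requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	if err := validateRequestedMemory(250); err != nil {
		fmt.Println("X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY:", err)
		os.Exit(23) // the status seen in the dry-run test above
	}
}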

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.19s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-300067 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-300067 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (192.856904ms)

                                                
                                                
-- stdout --
	* [functional-300067] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20107-1132549/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-1132549/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 11:28:50.484777 1166734 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:28:50.484944 1166734 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:28:50.484956 1166734 out.go:358] Setting ErrFile to fd 2...
	I1216 11:28:50.484962 1166734 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:28:50.485333 1166734 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-1132549/.minikube/bin
	I1216 11:28:50.485813 1166734 out.go:352] Setting JSON to false
	I1216 11:28:50.486801 1166734 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":29476,"bootTime":1734319055,"procs":189,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1216 11:28:50.486875 1166734 start.go:139] virtualization:  
	I1216 11:28:50.490252 1166734 out.go:177] * [functional-300067] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1216 11:28:50.494004 1166734 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 11:28:50.494083 1166734 notify.go:220] Checking for updates...
	I1216 11:28:50.499583 1166734 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 11:28:50.502430 1166734 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20107-1132549/kubeconfig
	I1216 11:28:50.505103 1166734 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-1132549/.minikube
	I1216 11:28:50.508030 1166734 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 11:28:50.510862 1166734 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 11:28:50.514218 1166734 config.go:182] Loaded profile config "functional-300067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:28:50.514785 1166734 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 11:28:50.549899 1166734 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1216 11:28:50.550055 1166734 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 11:28:50.602675 1166734 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:51 SystemTime:2024-12-16 11:28:50.593084551 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1216 11:28:50.602790 1166734 docker.go:318] overlay module found
	I1216 11:28:50.605820 1166734 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1216 11:28:50.608575 1166734 start.go:297] selected driver: docker
	I1216 11:28:50.608595 1166734 start.go:901] validating driver "docker" against &{Name:functional-300067 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1733912881-20083@sha256:64d8b27f78fd269d886e21ba8fc88be20de183ba5cc5bce33d0810e8a65f1df2 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-300067 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1216 11:28:50.608691 1166734 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 11:28:50.612280 1166734 out.go:201] 
	W1216 11:28:50.615127 1166734 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1216 11:28:50.617948 1166734 out.go:201] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.19s)
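
Note: the non-zero exit above is the expected outcome. The test starts minikube with a 250MiB memory request under a French locale and asserts on the localized RSRC_INSUFFICIENT_REQ_MEMORY message, which translates to "Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: The requested memory allocation of 250MiB is less than the usable minimum of 1800MB". A minimal Go sketch of that kind of floor check follows; the constant and function names are illustrative stand-ins, not minikube's actual validation code.

package main

import "fmt"

// minUsableMemoryMB mirrors the 1800MB floor reported in the log above;
// both names here are illustrative, not minikube internals.
const minUsableMemoryMB = 1800

func requireMinMemory(requestedMB int) error {
	if requestedMB < minUsableMemoryMB {
		return fmt.Errorf("RSRC_INSUFFICIENT_REQ_MEMORY: requested allocation %dMB is less than the usable minimum of %dMB", requestedMB, minUsableMemoryMB)
	}
	return nil
}

func main() {
	fmt.Println(requireMinMemory(250)) // fails, matching the run above
}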

TestFunctional/parallel/StatusCmd (1.01s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.01s)
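
Note: the -f argument above is a Go text/template rendered over minikube's status struct; the misspelled "kublet" label is literal in the command, so it appears verbatim in the output. A self-contained sketch of the same rendering, using a simplified stand-in struct:

package main

import (
	"os"
	"text/template"
)

// Status is a simplified stand-in for the struct minikube renders with
// status -f; only the fields the template above touches are included.
type Status struct {
	Host, Kubelet, APIServer, Kubeconfig string
}

func main() {
	// Same template string the test passes via -f, including the
	// intentional "kublet" label in the literal text.
	tmpl := template.Must(template.New("status").Parse(
		"host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}\n"))
	tmpl.Execute(os.Stdout, Status{"Running", "Running", "Running", "Configured"})
}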

TestFunctional/parallel/ServiceCmdConnect (36.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-300067 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-300067 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-g55vv" [92c19c5d-5184-43bc-9fbc-bdfd40aacfe1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
E1216 11:27:20.832494 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:27:20.838964 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:27:20.850485 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:27:20.871964 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:27:20.913451 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:27:20.994963 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:27:21.156524 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:27:21.477929 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:27:22.120041 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:27:23.401428 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:27:25.963372 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:27:31.085136 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:27:41.327166 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "hello-node-connect-65d86f57f4-g55vv" [92c19c5d-5184-43bc-9fbc-bdfd40aacfe1] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 36.003333893s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:30281
functional_test.go:1675: http://192.168.49.2:30281: success! body:

Hostname: hello-node-connect-65d86f57f4-g55vv

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:30281
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (36.60s)
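
Note: the flow above is create deployment, expose it as a NodePort, ask minikube for the service URL, then fetch it. The interleaved cert_rotation errors reference a leftover addons-467441 profile and recur at roughly doubling intervals, consistent with client-go's retry backoff; they do not affect this test. A minimal sketch of the final probe step, reusing the endpoint found in this run (it would differ on another run):

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

// probe fetches a NodePort endpoint the way the test checks
// hello-node-connect's response body.
func probe(url string) (string, error) {
	client := &http.Client{Timeout: 5 * time.Second}
	resp, err := client.Get(url)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	body, err := probe("http://192.168.49.2:30281") // endpoint from this run
	if err != nil {
		fmt.Println("probe failed:", err)
		return
	}
	fmt.Println(body)
}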

TestFunctional/parallel/AddonsCmd (0.15s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.15s)

TestFunctional/parallel/SSHCmd (0.7s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.70s)

TestFunctional/parallel/CpCmd (2.41s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh -n functional-300067 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 cp functional-300067:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3509188220/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh -n functional-300067 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh -n functional-300067 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.41s)

TestFunctional/parallel/FileSync (0.28s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/1137938/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh "sudo cat /etc/test/nested/copy/1137938/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.28s)

TestFunctional/parallel/CertSync (1.69s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/1137938.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh "sudo cat /etc/ssl/certs/1137938.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/1137938.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh "sudo cat /usr/share/ca-certificates/1137938.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/11379382.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh "sudo cat /etc/ssl/certs/11379382.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/11379382.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh "sudo cat /usr/share/ca-certificates/11379382.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.69s)
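
Note: the .0 filenames checked above are OpenSSL subject-hash names: the ca-certificates machinery links each trusted PEM under the hash of its subject so verification can locate it, which is why the same certificate is expected at both 1137938.pem and 51391683.0 (and the second at 11379382.pem and 3ec20f2e.0). A sketch that derives such a hash by shelling out to openssl; the path is taken from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// subjectHash returns the OpenSSL subject hash of a PEM certificate,
// i.e. the basename (plus ".0") used for links under /etc/ssl/certs.
func subjectHash(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-noout", "-subject_hash", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

func main() {
	// Per the pairing in the test above, this should print 51391683
	// for that certificate; substitute any local PEM to try it.
	fmt.Println(subjectHash("/etc/ssl/certs/1137938.pem"))
}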

TestFunctional/parallel/NodeLabels (0.09s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-300067 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.09s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300067 ssh "sudo systemctl is-active docker": exit status 1 (264.688602ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300067 ssh "sudo systemctl is-active containerd": exit status 1 (263.204317ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)
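
Note: the non-zero exits above are the success path. systemctl is-active prints the unit state and exits 0 only when the unit is active; exit status 3 is the LSB "not running" code, so "inactive" on stdout plus exit 3 is exactly what a crio-only node should report for docker and containerd. A sketch of reading the state despite the exit error:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// isActive runs `systemctl is-active <unit>`. The command exits
// non-zero (typically 3) for inactive units but still prints the
// state, so the exit error alone does not mean the probe failed.
func isActive(unit string) (string, error) {
	out, err := exec.Command("systemctl", "is-active", unit).Output()
	if state := strings.TrimSpace(string(out)); state != "" {
		return state, nil // e.g. "inactive" alongside exit status 3
	}
	return "", err
}

func main() {
	state, err := isActive("docker")
	fmt.Println(state, err)
}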

TestFunctional/parallel/License (0.25s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License
=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.25s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-300067 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-300067 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-300067 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 1163596: os: process already finished
helpers_test.go:502: unable to terminate pid 1163422: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-300067 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.59s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-300067 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.46s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-300067 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [b0756521-3fe5-4836-8c18-90a67081b524] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [b0756521-3fe5-4836-8c18-90a67081b524] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004138843s
I1216 11:27:15.061343 1137938 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.46s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-300067 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.09s)
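
Note: the jsonpath above reads the LoadBalancer IP that the running tunnel assigned to nginx-svc. An equivalent without jsonpath, decoding `kubectl ... -o json` in Go (a hypothetical helper, not test code):

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// ingressIP extracts .status.loadBalancer.ingress[0].ip from
// `kubectl get svc <name> -o json`, matching the jsonpath used above.
func ingressIP(kubeContext, svc string) (string, error) {
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"get", "svc", svc, "-o", "json").Output()
	if err != nil {
		return "", err
	}
	var obj struct {
		Status struct {
			LoadBalancer struct {
				Ingress []struct {
					IP string `json:"ip"`
				} `json:"ingress"`
			} `json:"loadBalancer"`
		} `json:"status"`
	}
	if err := json.Unmarshal(out, &obj); err != nil {
		return "", err
	}
	if len(obj.Status.LoadBalancer.Ingress) == 0 {
		return "", fmt.Errorf("no ingress IP assigned yet")
	}
	return obj.Status.LoadBalancer.Ingress[0].IP, nil
}

func main() {
	fmt.Println(ingressIP("functional-300067", "nginx-svc"))
}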

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.109.36.251 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-300067 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-300067 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-300067 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-7zbmm" [3e84a8be-649e-45d8-9e57-104cffd79c1a] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-7zbmm" [3e84a8be-649e-45d8-9e57-104cffd79c1a] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.003989911s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

TestFunctional/parallel/ServiceCmd/List (0.5s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.50s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 service list -o json
functional_test.go:1494: Took "519.631382ms" to run "out/minikube-linux-arm64 -p functional-300067 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.52s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:31783
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.37s)

TestFunctional/parallel/ServiceCmd/Format (0.37s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.37s)

TestFunctional/parallel/ServiceCmd/URL (0.58s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:31783
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.58s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.43s)

TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "372.851062ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "69.092987ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.44s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
E1216 11:28:01.808956 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:1366: Took "383.23107ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "65.818076ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.45s)

TestFunctional/parallel/MountCmd/any-port (30.89s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-300067 /tmp/TestFunctionalparallelMountCmdany-port2015962509/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1734348481884177453" to /tmp/TestFunctionalparallelMountCmdany-port2015962509/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1734348481884177453" to /tmp/TestFunctionalparallelMountCmdany-port2015962509/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1734348481884177453" to /tmp/TestFunctionalparallelMountCmdany-port2015962509/001/test-1734348481884177453
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300067 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (349.280377ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1216 11:28:02.233729 1137938 retry.go:31] will retry after 442.187511ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 16 11:28 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 16 11:28 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 16 11:28 test-1734348481884177453
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh cat /mount-9p/test-1734348481884177453
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-300067 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [7c965a91-6a9c-4e72-b539-af5109fe6aeb] Pending
helpers_test.go:344: "busybox-mount" [7c965a91-6a9c-4e72-b539-af5109fe6aeb] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [7c965a91-6a9c-4e72-b539-af5109fe6aeb] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [7c965a91-6a9c-4e72-b539-af5109fe6aeb] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 28.003892165s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-300067 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-300067 /tmp/TestFunctionalparallelMountCmdany-port2015962509/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (30.89s)
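
Note: the "will retry after 442.187511ms" line above comes from minikube's retry helper, which sleeps a jittered, growing interval between attempts until the 9p mount shows up in findmnt. A minimal sketch of that pattern (the shape of the behavior, not minikube's actual retry.go):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// retry runs fn until it succeeds or attempts run out, sleeping a
// jittered, doubling backoff in between, like the log line above.
func retry(attempts int, base time.Duration, fn func() error) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		d := base<<uint(i) + time.Duration(rand.Int63n(int64(base)))
		fmt.Printf("will retry after %v: %v\n", d, err)
		time.Sleep(d)
	}
	return err
}

func main() {
	calls := 0
	err := retry(5, 300*time.Millisecond, func() error {
		calls++
		if calls < 3 {
			return errors.New("mount not visible yet")
		}
		return nil
	})
	fmt.Println("done:", err)
}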

TestFunctional/parallel/MountCmd/VerifyCleanup (1.92s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-300067 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3554718448/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-300067 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3554718448/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-300067 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3554718448/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300067 ssh "findmnt -T" /mount1: exit status 1 (555.841087ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1216 11:28:48.072570 1137938 retry.go:31] will retry after 467.944823ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-300067 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-300067 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3554718448/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-300067 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3554718448/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-300067 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3554718448/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.92s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (1.11s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-300067 version -o=json --components: (1.110172553s)
--- PASS: TestFunctional/parallel/Version/components (1.11s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-300067 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-300067
localhost/kicbase/echo-server:functional-300067
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-300067 image ls --format short --alsologtostderr:
I1216 11:29:41.815011 1168510 out.go:345] Setting OutFile to fd 1 ...
I1216 11:29:41.815206 1168510 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 11:29:41.815237 1168510 out.go:358] Setting ErrFile to fd 2...
I1216 11:29:41.815263 1168510 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 11:29:41.815522 1168510 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-1132549/.minikube/bin
I1216 11:29:41.816209 1168510 config.go:182] Loaded profile config "functional-300067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 11:29:41.816379 1168510 config.go:182] Loaded profile config "functional-300067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 11:29:41.816935 1168510 cli_runner.go:164] Run: docker container inspect functional-300067 --format={{.State.Status}}
I1216 11:29:41.834258 1168510 ssh_runner.go:195] Run: systemctl --version
I1216 11:29:41.834316 1168510 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-300067
I1216 11:29:41.851847 1168510 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34251 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/functional-300067/id_rsa Username:docker}
I1216 11:29:41.941436 1168510 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.23s)
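
Note: per the stderr above, image ls is implemented by ssh-ing into the node and running `sudo crictl images --output json`, then rendering the decoded list in the requested format. A sketch decoding that JSON; the field names follow what crictl prints for CRI's ListImagesResponse, but treat the exact schema as an assumption:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList models `crictl images --output json`; schema assumed.
type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		fmt.Println(err)
		return
	}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			fmt.Println(tag) // the "short" format is essentially the tag list
		}
	}
}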

TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-300067 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/kube-apiserver          | v1.31.2            | f9c26480f1e72 | 92.6MB |
| registry.k8s.io/kube-scheduler          | v1.31.2            | d6b061e73ae45 | 67MB   |
| gcr.io/k8s-minikube/busybox             | latest             | 71a676dd070f4 | 1.63MB |
| localhost/my-image                      | functional-300067  | 4dffa7f8f5512 | 1.64MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 021d242013305 | 96MB   |
| docker.io/library/nginx                 | alpine             | dba92e6b64886 | 58.3MB |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 0bcd66b03df5f | 98.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 9404aea098d9e | 87MB   |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 2be0bcf609c65 | 98.3MB |
| localhost/kicbase/echo-server           | functional-300067  | ce2d2cda2d858 | 4.79MB |
| localhost/minikube-local-cache-test     | functional-300067  | 7d6bf501566cb | 3.33kB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-300067 image ls --format table --alsologtostderr:
I1216 11:29:46.319398 1168862 out.go:345] Setting OutFile to fd 1 ...
I1216 11:29:46.319589 1168862 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 11:29:46.319619 1168862 out.go:358] Setting ErrFile to fd 2...
I1216 11:29:46.319644 1168862 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 11:29:46.320018 1168862 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-1132549/.minikube/bin
I1216 11:29:46.321083 1168862 config.go:182] Loaded profile config "functional-300067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 11:29:46.321288 1168862 config.go:182] Loaded profile config "functional-300067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 11:29:46.322033 1168862 cli_runner.go:164] Run: docker container inspect functional-300067 --format={{.State.Status}}
I1216 11:29:46.340571 1168862 ssh_runner.go:195] Run: systemctl --version
I1216 11:29:46.340627 1168862 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-300067
I1216 11:29:46.358905 1168862 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34251 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/functional-300067/id_rsa Username:docker}
I1216 11:29:46.449138 1168862 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.23s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-300067 image ls --format json --alsologtostderr:
[{"id":"dba92e6b6488643fe4f2e872e6e4f6c30948171890d0f2cb96f28c435352397f","repoDigests":["docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4","docker.io/library/nginx@sha256:eff2df9ac0ef6c949886d040dc2037ee6576d76161249261982fb70458ae8c26"],"repoTags":["docker.io/library/nginx:alpine"],"size":"58293755"},{"id":"7d6bf501566cbe722f3d6425694fd5bee6224a3709fa33435abb6fa995e6b451","repoDigests":["localhost/minikube-local-cache-test@sha256:3010921a50f92da436ff935d41d9336bcfdb2e16d787f7e73841e326cd7979d2"],"repoTags":["localhost/minikube-local-cache-test:functional-300067"],"size":"3330"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"
27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139912446"},{"id":"0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:b61c0e5ba940299ee811efe946ee83e509799ea7e0651e1b782e83a665b29bae"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"98291250"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"021d2420133054f8835987db659750ff639ab686377646
0264dd8025c06644ba","repoDigests":["registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe","registry.k8s.io/kube-proxy@sha256:adabb2ce69fab82e04b441902489c8dd06f47122f00bc1062189f3cf477c795a"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"95952789"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903","repoDigests":["docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"98274354"},{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/k
ubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-300067"],"size":"4788229"},{"id":"f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270","repoDigests":["registry.k8s.io/kube-apiserver@sha256:8e7caee5
c8075d84ee5b93472bedf9cf21364da1d72d60d3de15dfa0d172ff63","registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"92632544"},{"id":"d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:38def311c8c2668b4b3820de83cd518e0d1c32cda10e661163f957a87f92ca34"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"67007814"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00
725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"5a77fbc4ed376c952f9d6daef78b582821522d6344c5fe1e7fa20989dcda0fbc","repoDigests":["docker.io/library/ceb50820891b4d51879ff7b25d0e688ef6b9b16cf13749332f176f6badedac97-tmp@sha256:08ffc5b82470e00f10ae03eb55e3be393ae3633702daf3167b0650416216e23e"],"repoTags":[],"size":"1637644"},{"id":"71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:a77fe109c026308f149d36484d795b42efe0fd29b332be9071f63e1634c36ac9","gcr.io/k8s-minikube/busybox@sha256:ca5ae90100d
50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b"],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1634527"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"4dffa7f8f5512432521d3507b5368191f761ac02c1919f32c3143acd9d36fa9e","repoDigests":["localhost/my-image@sha256:d924efb2824eedb2329dd76815a3bd81bea2cfeb65b06c94f1c6a070008dac2a"],"repoTags":["localhost/my-image:functional-300067"],"size":"1640226"},{"id":"9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752","registry.k8s.io/kube-controller-manager@sha256:b8d5
1076af39954cadc718ae40bd8a736ae5ad4e0654465ae91886cad3a9b602"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"86996294"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-300067 image ls --format json --alsologtostderr:
I1216 11:29:46.089557 1168831 out.go:345] Setting OutFile to fd 1 ...
I1216 11:29:46.089760 1168831 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 11:29:46.089789 1168831 out.go:358] Setting ErrFile to fd 2...
I1216 11:29:46.089809 1168831 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 11:29:46.090545 1168831 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-1132549/.minikube/bin
I1216 11:29:46.091874 1168831 config.go:182] Loaded profile config "functional-300067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 11:29:46.092144 1168831 config.go:182] Loaded profile config "functional-300067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 11:29:46.093099 1168831 cli_runner.go:164] Run: docker container inspect functional-300067 --format={{.State.Status}}
I1216 11:29:46.112844 1168831 ssh_runner.go:195] Run: systemctl --version
I1216 11:29:46.112906 1168831 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-300067
I1216 11:29:46.130852 1168831 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34251 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/functional-300067/id_rsa Username:docker}
I1216 11:29:46.221168 1168831 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.23s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-300067 image ls --format yaml --alsologtostderr:
- id: dba92e6b6488643fe4f2e872e6e4f6c30948171890d0f2cb96f28c435352397f
repoDigests:
- docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4
- docker.io/library/nginx@sha256:eff2df9ac0ef6c949886d040dc2037ee6576d76161249261982fb70458ae8c26
repoTags:
- docker.io/library/nginx:alpine
size: "58293755"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: 2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903
repoDigests:
- docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "98274354"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-300067
size: "4788229"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"
- id: f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:8e7caee5c8075d84ee5b93472bedf9cf21364da1d72d60d3de15dfa0d172ff63
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "92632544"
- id: 021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba
repoDigests:
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
- registry.k8s.io/kube-proxy@sha256:adabb2ce69fab82e04b441902489c8dd06f47122f00bc1062189f3cf477c795a
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "95952789"
- id: 0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:b61c0e5ba940299ee811efe946ee83e509799ea7e0651e1b782e83a665b29bae
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "98291250"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 7d6bf501566cbe722f3d6425694fd5bee6224a3709fa33435abb6fa995e6b451
repoDigests:
- localhost/minikube-local-cache-test@sha256:3010921a50f92da436ff935d41d9336bcfdb2e16d787f7e73841e326cd7979d2
repoTags:
- localhost/minikube-local-cache-test:functional-300067
size: "3330"
- id: 9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
- registry.k8s.io/kube-controller-manager@sha256:b8d51076af39954cadc718ae40bd8a736ae5ad4e0654465ae91886cad3a9b602
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "86996294"
- id: d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:38def311c8c2668b4b3820de83cd518e0d1c32cda10e661163f957a87f92ca34
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "67007814"

functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-300067 image ls --format yaml --alsologtostderr:
I1216 11:29:42.058183 1168542 out.go:345] Setting OutFile to fd 1 ...
I1216 11:29:42.058337 1168542 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 11:29:42.058365 1168542 out.go:358] Setting ErrFile to fd 2...
I1216 11:29:42.058372 1168542 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 11:29:42.058655 1168542 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-1132549/.minikube/bin
I1216 11:29:42.059380 1168542 config.go:182] Loaded profile config "functional-300067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 11:29:42.059552 1168542 config.go:182] Loaded profile config "functional-300067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 11:29:42.060103 1168542 cli_runner.go:164] Run: docker container inspect functional-300067 --format={{.State.Status}}
I1216 11:29:42.087630 1168542 ssh_runner.go:195] Run: systemctl --version
I1216 11:29:42.087701 1168542 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-300067
I1216 11:29:42.108663 1168542 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34251 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/functional-300067/id_rsa Username:docker}
I1216 11:29:42.202216 1168542 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.26s)

TestFunctional/parallel/ImageCommands/ImageBuild (3.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-300067 ssh pgrep buildkitd: exit status 1 (291.753116ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 image build -t localhost/my-image:functional-300067 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-300067 image build -t localhost/my-image:functional-300067 testdata/build --alsologtostderr: (3.244222954s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-300067 image build -t localhost/my-image:functional-300067 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 5a77fbc4ed3
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-300067
--> 4dffa7f8f55
Successfully tagged localhost/my-image:functional-300067
4dffa7f8f5512432521d3507b5368191f761ac02c1919f32c3143acd9d36fa9e
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-300067 image build -t localhost/my-image:functional-300067 testdata/build --alsologtostderr:
I1216 11:29:42.602831 1168637 out.go:345] Setting OutFile to fd 1 ...
I1216 11:29:42.603618 1168637 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 11:29:42.603635 1168637 out.go:358] Setting ErrFile to fd 2...
I1216 11:29:42.603643 1168637 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1216 11:29:42.603954 1168637 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-1132549/.minikube/bin
I1216 11:29:42.604828 1168637 config.go:182] Loaded profile config "functional-300067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 11:29:42.605466 1168637 config.go:182] Loaded profile config "functional-300067": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1216 11:29:42.606010 1168637 cli_runner.go:164] Run: docker container inspect functional-300067 --format={{.State.Status}}
I1216 11:29:42.623759 1168637 ssh_runner.go:195] Run: systemctl --version
I1216 11:29:42.623819 1168637 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-300067
I1216 11:29:42.641538 1168637 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34251 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/functional-300067/id_rsa Username:docker}
I1216 11:29:42.733822 1168637 build_images.go:161] Building image from path: /tmp/build.1191227680.tar
I1216 11:29:42.733898 1168637 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1216 11:29:42.743408 1168637 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1191227680.tar
I1216 11:29:42.748995 1168637 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1191227680.tar: stat -c "%s %y" /var/lib/minikube/build/build.1191227680.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1191227680.tar': No such file or directory
I1216 11:29:42.749033 1168637 ssh_runner.go:362] scp /tmp/build.1191227680.tar --> /var/lib/minikube/build/build.1191227680.tar (3072 bytes)
I1216 11:29:42.779293 1168637 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1191227680
I1216 11:29:42.788911 1168637 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1191227680 -xf /var/lib/minikube/build/build.1191227680.tar
I1216 11:29:42.798958 1168637 crio.go:315] Building image: /var/lib/minikube/build/build.1191227680
I1216 11:29:42.799056 1168637 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-300067 /var/lib/minikube/build/build.1191227680 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1216 11:29:45.761657 1168637 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-300067 /var/lib/minikube/build/build.1191227680 --cgroup-manager=cgroupfs: (2.962572397s)
I1216 11:29:45.761732 1168637 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1191227680
I1216 11:29:45.771941 1168637 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1191227680.tar
I1216 11:29:45.780871 1168637 build_images.go:217] Built localhost/my-image:functional-300067 from /tmp/build.1191227680.tar
I1216 11:29:45.780905 1168637 build_images.go:133] succeeded building to: functional-300067
I1216 11:29:45.780910 1168637 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.78s)
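
The three STEP lines in the stdout above map one-to-one onto the build context at testdata/build. As a rough sketch, reconstructed only from that STEP output (not copied from the minikube repo; content.txt is assumed to be a small arbitrary file sitting next to the Containerfile):

	FROM gcr.io/k8s-minikube/busybox
	RUN true
	ADD content.txt /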

TestFunctional/parallel/ImageCommands/Setup (0.68s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-300067
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.68s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 image load --daemon kicbase/echo-server:functional-300067 --alsologtostderr
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-300067 image load --daemon kicbase/echo-server:functional-300067 --alsologtostderr: (1.118546356s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.35s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.95s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 image load --daemon kicbase/echo-server:functional-300067 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.95s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-300067
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 image load --daemon kicbase/echo-server:functional-300067 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.20s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 image save kicbase/echo-server:functional-300067 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.54s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 image rm kicbase/echo-server:functional-300067 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.57s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.83s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.83s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-300067
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 image save --daemon kicbase/echo-server:functional-300067 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-300067
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.62s)
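
Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon exercise a full image round trip through a tarball and the host daemon. A condensed sketch of that flow, with <profile> and the tar path as placeholders rather than values from this run:

	minikube -p <profile> image save kicbase/echo-server:<profile> /tmp/echo-server-save.tar
	minikube -p <profile> image rm kicbase/echo-server:<profile>
	minikube -p <profile> image load /tmp/echo-server-save.tar
	minikube -p <profile> image save --daemon kicbase/echo-server:<profile>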

TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 update-context --alsologtostderr -v=2
E1216 11:30:04.693148 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.15s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-300067 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.14s)

TestFunctional/delete_echo-server_images (0.05s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-300067
--- PASS: TestFunctional/delete_echo-server_images (0.05s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-300067
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-300067
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (176.3s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-116869 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1216 11:32:04.608947 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:32:04.615321 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:32:04.626709 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:32:04.648118 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:32:04.689547 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:32:04.770970 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:32:04.932571 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:32:05.253974 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:32:05.895641 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:32:07.178600 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:32:09.740609 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:32:14.862869 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:32:20.832482 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:32:25.104740 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:32:45.586905 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:32:48.534491 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-116869 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m55.437306962s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (176.30s)
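
For reference, the invocation under test at ha_test.go:101 boils down to the following, where --ha requests a highly-available (multi-control-plane) topology; the test-only -v=7 --alsologtostderr logging flags are dropped here:

	minikube start -p ha-116869 --wait=true --memory=2200 --ha \
	  --driver=docker --container-runtime=crio
	minikube -p ha-116869 status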

TestMultiControlPlane/serial/DeployApp (9.45s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-116869 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-116869 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-116869 -- rollout status deployment/busybox: (6.443696826s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-116869 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-116869 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-116869 -- exec busybox-7dff88458-r6bgq -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-116869 -- exec busybox-7dff88458-rgjw5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-116869 -- exec busybox-7dff88458-sbm9v -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-116869 -- exec busybox-7dff88458-r6bgq -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-116869 -- exec busybox-7dff88458-rgjw5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-116869 -- exec busybox-7dff88458-sbm9v -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-116869 -- exec busybox-7dff88458-r6bgq -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-116869 -- exec busybox-7dff88458-rgjw5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-116869 -- exec busybox-7dff88458-sbm9v -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.45s)
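
The DeployApp checks above repeat one pattern per busybox replica: resolve an external name, the in-cluster service, and its fully qualified form. With <pod> standing in for any of the pod names listed above, the plain-kubectl equivalent (assuming the ha-116869 context exists in your kubeconfig) is roughly:

	kubectl --context ha-116869 exec <pod> -- nslookup kubernetes.io
	kubectl --context ha-116869 exec <pod> -- nslookup kubernetes.default
	kubectl --context ha-116869 exec <pod> -- nslookup kubernetes.default.svc.cluster.local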

TestMultiControlPlane/serial/PingHostFromPods (1.69s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-116869 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-116869 -- exec busybox-7dff88458-r6bgq -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-116869 -- exec busybox-7dff88458-r6bgq -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-116869 -- exec busybox-7dff88458-rgjw5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-116869 -- exec busybox-7dff88458-rgjw5 -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-116869 -- exec busybox-7dff88458-sbm9v -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-116869 -- exec busybox-7dff88458-sbm9v -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.69s)
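
The pipeline in ha_test.go:207 leans on busybox's fixed nslookup layout: the fifth output line carries the resolved answer, and its third space-separated field is the address, which ha_test.go:218 then pings once to prove pod-to-host connectivity. Sketched as it would run inside the pod (the line and field offsets are specific to busybox's nslookup output and are an inference here):

	HOST_IP=$(nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3)
	ping -c 1 "$HOST_IP"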

TestMultiControlPlane/serial/AddWorkerNode (36.87s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-116869 -v=7 --alsologtostderr
E1216 11:33:26.548892 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-116869 -v=7 --alsologtostderr: (35.701374301s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-116869 status -v=7 --alsologtostderr: (1.168919838s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (36.87s)

TestMultiControlPlane/serial/NodeLabels (0.12s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-116869 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.031065384s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.03s)

TestMultiControlPlane/serial/CopyFile (18.94s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 cp testdata/cp-test.txt ha-116869:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 cp ha-116869:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2205639880/001/cp-test_ha-116869.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 cp ha-116869:/home/docker/cp-test.txt ha-116869-m02:/home/docker/cp-test_ha-116869_ha-116869-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869-m02 "sudo cat /home/docker/cp-test_ha-116869_ha-116869-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 cp ha-116869:/home/docker/cp-test.txt ha-116869-m03:/home/docker/cp-test_ha-116869_ha-116869-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869-m03 "sudo cat /home/docker/cp-test_ha-116869_ha-116869-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 cp ha-116869:/home/docker/cp-test.txt ha-116869-m04:/home/docker/cp-test_ha-116869_ha-116869-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869-m04 "sudo cat /home/docker/cp-test_ha-116869_ha-116869-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 cp testdata/cp-test.txt ha-116869-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 cp ha-116869-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2205639880/001/cp-test_ha-116869-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 cp ha-116869-m02:/home/docker/cp-test.txt ha-116869:/home/docker/cp-test_ha-116869-m02_ha-116869.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869 "sudo cat /home/docker/cp-test_ha-116869-m02_ha-116869.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 cp ha-116869-m02:/home/docker/cp-test.txt ha-116869-m03:/home/docker/cp-test_ha-116869-m02_ha-116869-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869-m03 "sudo cat /home/docker/cp-test_ha-116869-m02_ha-116869-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 cp ha-116869-m02:/home/docker/cp-test.txt ha-116869-m04:/home/docker/cp-test_ha-116869-m02_ha-116869-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869-m04 "sudo cat /home/docker/cp-test_ha-116869-m02_ha-116869-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 cp testdata/cp-test.txt ha-116869-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 cp ha-116869-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2205639880/001/cp-test_ha-116869-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 cp ha-116869-m03:/home/docker/cp-test.txt ha-116869:/home/docker/cp-test_ha-116869-m03_ha-116869.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869 "sudo cat /home/docker/cp-test_ha-116869-m03_ha-116869.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 cp ha-116869-m03:/home/docker/cp-test.txt ha-116869-m02:/home/docker/cp-test_ha-116869-m03_ha-116869-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869-m02 "sudo cat /home/docker/cp-test_ha-116869-m03_ha-116869-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 cp ha-116869-m03:/home/docker/cp-test.txt ha-116869-m04:/home/docker/cp-test_ha-116869-m03_ha-116869-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869-m04 "sudo cat /home/docker/cp-test_ha-116869-m03_ha-116869-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 cp testdata/cp-test.txt ha-116869-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 cp ha-116869-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile2205639880/001/cp-test_ha-116869-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 cp ha-116869-m04:/home/docker/cp-test.txt ha-116869:/home/docker/cp-test_ha-116869-m04_ha-116869.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869 "sudo cat /home/docker/cp-test_ha-116869-m04_ha-116869.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 cp ha-116869-m04:/home/docker/cp-test.txt ha-116869-m02:/home/docker/cp-test_ha-116869-m04_ha-116869-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869-m02 "sudo cat /home/docker/cp-test_ha-116869-m04_ha-116869-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 cp ha-116869-m04:/home/docker/cp-test.txt ha-116869-m03:/home/docker/cp-test_ha-116869-m04_ha-116869-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 ssh -n ha-116869-m03 "sudo cat /home/docker/cp-test_ha-116869-m04_ha-116869-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (18.94s)
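
Every pairing in the CopyFile matrix above follows the same two-step pattern: copy with minikube cp, then verify the bytes over SSH on the receiving node. With <src> and <dst> as placeholders for any of ha-116869, ha-116869-m02, -m03 and -m04:

	minikube -p ha-116869 cp <src>:/home/docker/cp-test.txt <dst>:/home/docker/cp-test_<src>_<dst>.txt
	minikube -p ha-116869 ssh -n <dst> "sudo cat /home/docker/cp-test_<src>_<dst>.txt"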

TestMultiControlPlane/serial/StopSecondaryNode (12.69s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-116869 node stop m02 -v=7 --alsologtostderr: (11.953672343s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-116869 status -v=7 --alsologtostderr: exit status 7 (738.858741ms)

-- stdout --
	ha-116869
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-116869-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-116869-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-116869-m04
	type: Worker
	host: Running
	kubelet: Running
	

-- /stdout --
** stderr ** 
	I1216 11:34:33.177727 1185213 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:34:33.177861 1185213 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:34:33.177866 1185213 out.go:358] Setting ErrFile to fd 2...
	I1216 11:34:33.177871 1185213 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:34:33.178311 1185213 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-1132549/.minikube/bin
	I1216 11:34:33.178615 1185213 out.go:352] Setting JSON to false
	I1216 11:34:33.178664 1185213 mustload.go:65] Loading cluster: ha-116869
	I1216 11:34:33.179464 1185213 config.go:182] Loaded profile config "ha-116869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:34:33.179493 1185213 status.go:174] checking status of ha-116869 ...
	I1216 11:34:33.180008 1185213 notify.go:220] Checking for updates...
	I1216 11:34:33.180723 1185213 cli_runner.go:164] Run: docker container inspect ha-116869 --format={{.State.Status}}
	I1216 11:34:33.199901 1185213 status.go:371] ha-116869 host status = "Running" (err=<nil>)
	I1216 11:34:33.199929 1185213 host.go:66] Checking if "ha-116869" exists ...
	I1216 11:34:33.200249 1185213 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-116869
	I1216 11:34:33.222893 1185213 host.go:66] Checking if "ha-116869" exists ...
	I1216 11:34:33.223277 1185213 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 11:34:33.223333 1185213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-116869
	I1216 11:34:33.249082 1185213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34256 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/ha-116869/id_rsa Username:docker}
	I1216 11:34:33.342317 1185213 ssh_runner.go:195] Run: systemctl --version
	I1216 11:34:33.346908 1185213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 11:34:33.359181 1185213 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 11:34:33.421596 1185213 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-12-16 11:34:33.411057142 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1216 11:34:33.422254 1185213 kubeconfig.go:125] found "ha-116869" server: "https://192.168.49.254:8443"
	I1216 11:34:33.422296 1185213 api_server.go:166] Checking apiserver status ...
	I1216 11:34:33.422348 1185213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:34:33.434320 1185213 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1387/cgroup
	I1216 11:34:33.444524 1185213 api_server.go:182] apiserver freezer: "4:freezer:/docker/46d82ca014146a2c3594c337670276127df5022086e3c57bab11adee0b974063/crio/crio-1df55191ae30ac904454b8f95262671699470a44c9345d2e8e41074a2dc2429f"
	I1216 11:34:33.444598 1185213 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/46d82ca014146a2c3594c337670276127df5022086e3c57bab11adee0b974063/crio/crio-1df55191ae30ac904454b8f95262671699470a44c9345d2e8e41074a2dc2429f/freezer.state
	I1216 11:34:33.453702 1185213 api_server.go:204] freezer state: "THAWED"
	I1216 11:34:33.453742 1185213 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1216 11:34:33.461455 1185213 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1216 11:34:33.461482 1185213 status.go:463] ha-116869 apiserver status = Running (err=<nil>)
	I1216 11:34:33.461493 1185213 status.go:176] ha-116869 status: &{Name:ha-116869 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 11:34:33.461509 1185213 status.go:174] checking status of ha-116869-m02 ...
	I1216 11:34:33.461825 1185213 cli_runner.go:164] Run: docker container inspect ha-116869-m02 --format={{.State.Status}}
	I1216 11:34:33.480416 1185213 status.go:371] ha-116869-m02 host status = "Stopped" (err=<nil>)
	I1216 11:34:33.480441 1185213 status.go:384] host is not running, skipping remaining checks
	I1216 11:34:33.480456 1185213 status.go:176] ha-116869-m02 status: &{Name:ha-116869-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 11:34:33.480475 1185213 status.go:174] checking status of ha-116869-m03 ...
	I1216 11:34:33.480825 1185213 cli_runner.go:164] Run: docker container inspect ha-116869-m03 --format={{.State.Status}}
	I1216 11:34:33.497985 1185213 status.go:371] ha-116869-m03 host status = "Running" (err=<nil>)
	I1216 11:34:33.498011 1185213 host.go:66] Checking if "ha-116869-m03" exists ...
	I1216 11:34:33.498359 1185213 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-116869-m03
	I1216 11:34:33.515408 1185213 host.go:66] Checking if "ha-116869-m03" exists ...
	I1216 11:34:33.515829 1185213 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 11:34:33.515877 1185213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-116869-m03
	I1216 11:34:33.534266 1185213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34266 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/ha-116869-m03/id_rsa Username:docker}
	I1216 11:34:33.626358 1185213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 11:34:33.638520 1185213 kubeconfig.go:125] found "ha-116869" server: "https://192.168.49.254:8443"
	I1216 11:34:33.638549 1185213 api_server.go:166] Checking apiserver status ...
	I1216 11:34:33.638591 1185213 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:34:33.650186 1185213 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1313/cgroup
	I1216 11:34:33.661990 1185213 api_server.go:182] apiserver freezer: "4:freezer:/docker/4df72aada4511c946df08ca6fcb5b6fd52f54ffb2c2823c36b586f6f5bcc8af2/crio/crio-d631a8d3de738f71c200f0fba27c59aa067402f8aa7b1d67047ffb903c6fe7f8"
	I1216 11:34:33.662066 1185213 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/4df72aada4511c946df08ca6fcb5b6fd52f54ffb2c2823c36b586f6f5bcc8af2/crio/crio-d631a8d3de738f71c200f0fba27c59aa067402f8aa7b1d67047ffb903c6fe7f8/freezer.state
	I1216 11:34:33.671586 1185213 api_server.go:204] freezer state: "THAWED"
	I1216 11:34:33.671642 1185213 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1216 11:34:33.680804 1185213 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1216 11:34:33.680834 1185213 status.go:463] ha-116869-m03 apiserver status = Running (err=<nil>)
	I1216 11:34:33.680848 1185213 status.go:176] ha-116869-m03 status: &{Name:ha-116869-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 11:34:33.680869 1185213 status.go:174] checking status of ha-116869-m04 ...
	I1216 11:34:33.681267 1185213 cli_runner.go:164] Run: docker container inspect ha-116869-m04 --format={{.State.Status}}
	I1216 11:34:33.700821 1185213 status.go:371] ha-116869-m04 host status = "Running" (err=<nil>)
	I1216 11:34:33.700853 1185213 host.go:66] Checking if "ha-116869-m04" exists ...
	I1216 11:34:33.701161 1185213 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-116869-m04
	I1216 11:34:33.719999 1185213 host.go:66] Checking if "ha-116869-m04" exists ...
	I1216 11:34:33.720318 1185213 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 11:34:33.720365 1185213 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-116869-m04
	I1216 11:34:33.749471 1185213 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34271 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/ha-116869-m04/id_rsa Username:docker}
	I1216 11:34:33.845923 1185213 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 11:34:33.859572 1185213 status.go:176] ha-116869-m04 status: &{Name:ha-116869-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.69s)
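
The status probe visible in the stderr above checks each running control plane the same way: locate the kube-apiserver process, confirm its freezer cgroup is THAWED, then hit /healthz on the load-balancer VIP. A hand-run approximation (the test performs these probes from Go; <pid> and <cgroup-path> are placeholders, and curl is substituted here for the in-process HTTPS check):

	sudo pgrep -xnf 'kube-apiserver.*minikube.*'
	sudo egrep '^[0-9]+:freezer:' /proc/<pid>/cgroup
	sudo cat /sys/fs/cgroup/freezer/<cgroup-path>/freezer.state    # expect THAWED
	curl -k https://192.168.49.254:8443/healthz                    # expect 200 "ok"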

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.8s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.80s)
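The degraded/happy checks in this suite all go through `minikube profile list --output json`. A hedged sketch of consuming that output — the top-level `valid` key and the per-profile `Name`/`Status` fields are assumptions about the JSON shape, not something this log confirms:

	package main

	import (
		"encoding/json"
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		out, err := exec.Command("out/minikube-linux-arm64", "profile", "list", "--output", "json").Output()
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		// Decode loosely so unknown fields don't break the sketch.
		var profiles struct {
			Valid []struct {
				Name   string
				Status string
			} `json:"valid"`
		}
		if err := json.Unmarshal(out, &profiles); err != nil {
			fmt.Fprintln(os.Stderr, err)
			return
		}
		for _, p := range profiles.Valid {
			fmt.Printf("%s: %s\n", p.Name, p.Status) // e.g. ha-116869: Degraded
		}
	}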

TestMultiControlPlane/serial/RestartSecondaryNode (24.85s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 node start m02 -v=7 --alsologtostderr
E1216 11:34:48.471036 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-116869 node start m02 -v=7 --alsologtostderr: (23.263719469s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-116869 status -v=7 --alsologtostderr: (1.429987616s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (24.85s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.45s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.447160947s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.45s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (206.08s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-116869 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-116869 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-116869 -v=7 --alsologtostderr: (37.16426654s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-116869 --wait=true -v=7 --alsologtostderr
E1216 11:37:04.607456 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:37:20.832457 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:37:32.313165 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-116869 --wait=true -v=7 --alsologtostderr: (2m48.698468631s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-116869
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (206.08s)

TestMultiControlPlane/serial/DeleteSecondaryNode (12.68s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-116869 node delete m03 -v=7 --alsologtostderr: (11.660753464s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.68s)
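The `kubectl get nodes -o go-template=...` assertion above emits one " True" line per node whose Ready condition is True. A sketch that runs that exact template locally against a stand-in node list (the sample JSON is hypothetical; kubectl feeds the template the real object):

	package main

	import (
		"encoding/json"
		"os"
		"text/template"
	)

	func main() {
		// Stand-in for `kubectl get nodes -o json`: two nodes, both Ready.
		sample := []byte(`{"items":[
			{"status":{"conditions":[{"type":"Ready","status":"True"}]}},
			{"status":{"conditions":[{"type":"Ready","status":"True"}]}}]}`)
		var nodes map[string]any
		if err := json.Unmarshal(sample, &nodes); err != nil {
			panic(err)
		}
		// The template copied from the test above; prints " True" once per Ready node.
		tmpl := template.Must(template.New("ready").Parse(
			`{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
		_ = tmpl.Execute(os.Stdout, nodes)
	}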

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

TestMultiControlPlane/serial/StopCluster (35.71s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 stop -v=7 --alsologtostderr
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-116869 stop -v=7 --alsologtostderr: (35.573879537s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-116869 status -v=7 --alsologtostderr: exit status 7 (135.013757ms)
-- stdout --
	ha-116869
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-116869-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-116869-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1216 11:39:16.121638 1199656 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:39:16.121863 1199656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:39:16.121900 1199656 out.go:358] Setting ErrFile to fd 2...
	I1216 11:39:16.121920 1199656 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:39:16.122702 1199656 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-1132549/.minikube/bin
	I1216 11:39:16.122994 1199656 out.go:352] Setting JSON to false
	I1216 11:39:16.123073 1199656 mustload.go:65] Loading cluster: ha-116869
	I1216 11:39:16.123169 1199656 notify.go:220] Checking for updates...
	I1216 11:39:16.123589 1199656 config.go:182] Loaded profile config "ha-116869": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:39:16.123605 1199656 status.go:174] checking status of ha-116869 ...
	I1216 11:39:16.124253 1199656 cli_runner.go:164] Run: docker container inspect ha-116869 --format={{.State.Status}}
	I1216 11:39:16.144676 1199656 status.go:371] ha-116869 host status = "Stopped" (err=<nil>)
	I1216 11:39:16.144706 1199656 status.go:384] host is not running, skipping remaining checks
	I1216 11:39:16.144713 1199656 status.go:176] ha-116869 status: &{Name:ha-116869 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 11:39:16.144774 1199656 status.go:174] checking status of ha-116869-m02 ...
	I1216 11:39:16.145089 1199656 cli_runner.go:164] Run: docker container inspect ha-116869-m02 --format={{.State.Status}}
	I1216 11:39:16.179931 1199656 status.go:371] ha-116869-m02 host status = "Stopped" (err=<nil>)
	I1216 11:39:16.179955 1199656 status.go:384] host is not running, skipping remaining checks
	I1216 11:39:16.179962 1199656 status.go:176] ha-116869-m02 status: &{Name:ha-116869-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 11:39:16.179980 1199656 status.go:174] checking status of ha-116869-m04 ...
	I1216 11:39:16.180312 1199656 cli_runner.go:164] Run: docker container inspect ha-116869-m04 --format={{.State.Status}}
	I1216 11:39:16.198278 1199656 status.go:371] ha-116869-m04 host status = "Stopped" (err=<nil>)
	I1216 11:39:16.198301 1199656 status.go:384] host is not running, skipping remaining checks
	I1216 11:39:16.198309 1199656 status.go:176] ha-116869-m04 status: &{Name:ha-116869-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.71s)
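Note that `minikube status` exits non-zero (status 7 in the run above) when hosts are stopped, while still printing a usable state report on stdout. A sketch of separating "command failed" from "cluster reports stopped" with os/exec; treating any non-zero exit that comes with parseable stdout as a state report is this sketch's assumption, not a documented contract:

	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-116869", "status")
		out, err := cmd.Output() // Output still returns collected stdout on ExitError
		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Printf("all components running:\n%s", out)
		case errors.As(err, &exitErr):
			// Non-zero exit (7 above, with everything stopped) still carries the report.
			fmt.Printf("status exited %d; reported state:\n%s", exitErr.ExitCode(), out)
		default:
			fmt.Println("could not run status:", err)
		}
	}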

TestMultiControlPlane/serial/RestartCluster (121.05s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-116869 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-116869 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m0.136177598s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (121.05s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.75s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.75s)

TestMultiControlPlane/serial/AddSecondaryNode (72.34s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-116869 --control-plane -v=7 --alsologtostderr
E1216 11:42:04.607025 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:42:20.831560 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-116869 --control-plane -v=7 --alsologtostderr: (1m11.323805648s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-116869 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-116869 status -v=7 --alsologtostderr: (1.019114732s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (72.34s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.99s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.99s)

TestJSONOutput/start/Command (51.9s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-022278 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-022278 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (51.895662084s)
--- PASS: TestJSONOutput/start/Command (51.90s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.73s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-022278 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.73s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.69s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-022278 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.69s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (5.87s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-022278 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-022278 --output=json --user=testUser: (5.870140441s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.26s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-142736 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-142736 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (102.712805ms)
-- stdout --
	{"specversion":"1.0","id":"f2a60add-bc06-4447-bb2d-1942d8d8b977","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-142736] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"d6bb486c-7ac0-4c18-bd99-c2d061bf099d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20107"}}
	{"specversion":"1.0","id":"f02edb84-8157-41df-a839-8d04d58e8bea","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"23671768-589b-45ac-8883-dd27af0eafab","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20107-1132549/kubeconfig"}}
	{"specversion":"1.0","id":"10baaa95-d20e-46ac-b247-f62ac5062e71","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-1132549/.minikube"}}
	{"specversion":"1.0","id":"5ef5361e-5b50-491d-9069-600905dc50e5","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"40e69048-1a22-4507-a89f-08cc75bace6d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"ecadfc7b-ab75-4daa-8a62-5123ba13d33e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-142736" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-142736
--- PASS: TestErrorJSONOutput (0.26s)
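With --output=json, each stdout line is a CloudEvents envelope like the ones dumped above; failures arrive as type io.k8s.sigs.minikube.error with string-valued data fields (name, exitcode, message, ...). A sketch that filters those lines — the struct is a minimal stand-in shaped after the logged events, not minikube's own types:

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
	)

	// Minimal envelope, shaped after the events logged above.
	type cloudEvent struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		// Pipe `minikube start --output=json ...` into this program.
		sc := bufio.NewScanner(os.Stdin)
		for sc.Scan() {
			var e cloudEvent
			if json.Unmarshal(sc.Bytes(), &e) != nil {
				continue // not an event line
			}
			if e.Type == "io.k8s.sigs.minikube.error" {
				// For the run above this prints:
				// DRV_UNSUPPORTED_OS (exit 56): The driver 'fail' is not supported on linux/arm64
				fmt.Printf("%s (exit %s): %s\n", e.Data["name"], e.Data["exitcode"], e.Data["message"])
			}
		}
	}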

TestKicCustomNetwork/create_custom_network (37.79s)

=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-656523 --network=
E1216 11:43:43.896938 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-656523 --network=: (35.621847905s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-656523" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-656523
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-656523: (2.143859094s)
--- PASS: TestKicCustomNetwork/create_custom_network (37.79s)

TestKicCustomNetwork/use_default_bridge_network (36.32s)

=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-629650 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-629650 --network=bridge: (34.317111304s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-629650" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-629650
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-629650: (1.973447986s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.32s)

TestKicExistingNetwork (32.74s)

=== RUN   TestKicExistingNetwork
I1216 11:44:57.250985 1137938 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1216 11:44:57.266836 1137938 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1216 11:44:57.266927 1137938 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1216 11:44:57.266950 1137938 cli_runner.go:164] Run: docker network inspect existing-network
W1216 11:44:57.282071 1137938 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1216 11:44:57.282103 1137938 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]
stderr:
Error response from daemon: network existing-network not found
I1216 11:44:57.282120 1137938 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found
** /stderr **
I1216 11:44:57.282315 1137938 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1216 11:44:57.300823 1137938 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-0e82c425ed05 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:a3:ce:51:99} reservation:<nil>}
I1216 11:44:57.301402 1137938 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x40016dd3e0}
I1216 11:44:57.301441 1137938 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1216 11:44:57.301502 1137938 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1216 11:44:57.372394 1137938 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-423606 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-423606 --network=existing-network: (30.611602768s)
helpers_test.go:175: Cleaning up "existing-network-423606" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-423606
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-423606: (1.974165634s)
I1216 11:45:29.974623 1137938 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (32.74s)
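The trace above is the interesting part of this test: inspect fails because the network does not exist yet, 192.168.49.0/24 is skipped as taken, and the first free private /24 (192.168.58.0/24) is used for `docker network create`. A sketch reproducing that create call with os/exec — the flags are copied from the logged command; only the error handling around it is added:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Same invocation the log shows for creating the labelled bridge network.
		cmd := exec.Command("docker", "network", "create",
			"--driver=bridge",
			"--subnet=192.168.58.0/24",
			"--gateway=192.168.58.1",
			"-o", "--ip-masq", "-o", "--icc",
			"-o", "com.docker.network.driver.mtu=1500",
			"--label=created_by.minikube.sigs.k8s.io=true",
			"--label=name.minikube.sigs.k8s.io=existing-network",
			"existing-network")
		out, err := cmd.CombinedOutput()
		fmt.Printf("%s", out) // docker prints the new network ID on success
		if err != nil {
			fmt.Println("create failed:", err)
		}
	}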

TestKicCustomSubnet (33.76s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-791567 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-791567 --subnet=192.168.60.0/24: (31.510431077s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-791567 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-791567" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-791567
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-791567: (2.210874471s)
--- PASS: TestKicCustomSubnet (33.76s)

TestKicStaticIP (32.46s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-866460 --static-ip=192.168.200.200
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-866460 --static-ip=192.168.200.200: (30.121305873s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-866460 ip
helpers_test.go:175: Cleaning up "static-ip-866460" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-866460
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-866460: (2.160005048s)
--- PASS: TestKicStaticIP (32.46s)

TestMainNoArgs (0.05s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.05s)

TestMinikubeProfile (70.28s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-366118 --driver=docker  --container-runtime=crio
E1216 11:47:04.607510 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-366118 --driver=docker  --container-runtime=crio: (31.272200139s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-369187 --driver=docker  --container-runtime=crio
E1216 11:47:20.832413 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-369187 --driver=docker  --container-runtime=crio: (33.237111901s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-366118
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-369187
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-369187" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-369187
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-369187: (2.037066024s)
helpers_test.go:175: Cleaning up "first-366118" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-366118
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-366118: (2.288021584s)
--- PASS: TestMinikubeProfile (70.28s)

TestMountStart/serial/StartWithMountFirst (9.99s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-955869 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-955869 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.991593716s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.99s)

TestMountStart/serial/VerifyMountFirst (0.26s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-955869 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

TestMountStart/serial/StartWithMountSecond (7.43s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-957682 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-957682 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.425663305s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.43s)

TestMountStart/serial/VerifyMountSecond (0.26s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-957682 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.26s)

TestMountStart/serial/DeleteFirst (1.61s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-955869 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-955869 --alsologtostderr -v=5: (1.60952859s)
--- PASS: TestMountStart/serial/DeleteFirst (1.61s)

TestMountStart/serial/VerifyMountPostDelete (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-957682 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.27s)

TestMountStart/serial/Stop (1.2s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-957682
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-957682: (1.195768266s)
--- PASS: TestMountStart/serial/Stop (1.20s)

TestMountStart/serial/RestartStopped (7.64s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-957682
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-957682: (6.640496767s)
--- PASS: TestMountStart/serial/RestartStopped (7.64s)

TestMountStart/serial/VerifyMountPostStop (0.27s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-957682 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

TestMultiNode/serial/FreshStart2Nodes (77.63s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-175261 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1216 11:48:27.675270 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-175261 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m17.115402047s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (77.63s)

TestMultiNode/serial/DeployApp2Nodes (6.44s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175261 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175261 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-175261 -- rollout status deployment/busybox: (4.463885848s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175261 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175261 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175261 -- exec busybox-7dff88458-hsfr4 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175261 -- exec busybox-7dff88458-kcdns -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175261 -- exec busybox-7dff88458-hsfr4 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175261 -- exec busybox-7dff88458-kcdns -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175261 -- exec busybox-7dff88458-hsfr4 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175261 -- exec busybox-7dff88458-kcdns -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (6.44s)

TestMultiNode/serial/PingHostFrom2Pods (1.05s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175261 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175261 -- exec busybox-7dff88458-hsfr4 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175261 -- exec busybox-7dff88458-hsfr4 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175261 -- exec busybox-7dff88458-kcdns -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-175261 -- exec busybox-7dff88458-kcdns -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.05s)
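The host-ping check extracts the host IP from busybox's nslookup output with `awk 'NR==5' | cut -d' ' -f3` (line 5, third space-separated field) and then pings it. The same extraction in Go, against a stand-in nslookup transcript — the sample text is hypothetical, and busybox formatting varies across versions, which is exactly why the test pins a line number:

	package main

	import (
		"fmt"
		"strings"
	)

	func main() {
		// Stand-in for `nslookup host.minikube.internal` inside a busybox pod.
		out := "Server:    10.96.0.10\n" +
			"Address:   10.96.0.10:53\n" +
			"\n" +
			"Name:      host.minikube.internal\n" +
			"Address 1: 192.168.67.1 host.minikube.internal\n"
		lines := strings.Split(out, "\n")
		// awk 'NR==5': the fifth line. strings.Fields collapses space runs where
		// cut -d' ' counts single spaces, but the fields here are single-spaced.
		fields := strings.Fields(lines[4])
		fmt.Println(fields[2]) // cut -f3: prints 192.168.67.1
	}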

TestMultiNode/serial/AddNode (27.97s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-175261 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-175261 -v 3 --alsologtostderr: (27.275081632s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (27.97s)

TestMultiNode/serial/MultiNodeLabels (0.09s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-175261 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.09s)

TestMultiNode/serial/ProfileList (0.7s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.70s)

TestMultiNode/serial/CopyFile (10.21s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 cp testdata/cp-test.txt multinode-175261:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 ssh -n multinode-175261 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 cp multinode-175261:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3635573858/001/cp-test_multinode-175261.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 ssh -n multinode-175261 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 cp multinode-175261:/home/docker/cp-test.txt multinode-175261-m02:/home/docker/cp-test_multinode-175261_multinode-175261-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 ssh -n multinode-175261 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 ssh -n multinode-175261-m02 "sudo cat /home/docker/cp-test_multinode-175261_multinode-175261-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 cp multinode-175261:/home/docker/cp-test.txt multinode-175261-m03:/home/docker/cp-test_multinode-175261_multinode-175261-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 ssh -n multinode-175261 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 ssh -n multinode-175261-m03 "sudo cat /home/docker/cp-test_multinode-175261_multinode-175261-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 cp testdata/cp-test.txt multinode-175261-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 ssh -n multinode-175261-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 cp multinode-175261-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3635573858/001/cp-test_multinode-175261-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 ssh -n multinode-175261-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 cp multinode-175261-m02:/home/docker/cp-test.txt multinode-175261:/home/docker/cp-test_multinode-175261-m02_multinode-175261.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 ssh -n multinode-175261-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 ssh -n multinode-175261 "sudo cat /home/docker/cp-test_multinode-175261-m02_multinode-175261.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 cp multinode-175261-m02:/home/docker/cp-test.txt multinode-175261-m03:/home/docker/cp-test_multinode-175261-m02_multinode-175261-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 ssh -n multinode-175261-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 ssh -n multinode-175261-m03 "sudo cat /home/docker/cp-test_multinode-175261-m02_multinode-175261-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 cp testdata/cp-test.txt multinode-175261-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 ssh -n multinode-175261-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 cp multinode-175261-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile3635573858/001/cp-test_multinode-175261-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 ssh -n multinode-175261-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 cp multinode-175261-m03:/home/docker/cp-test.txt multinode-175261:/home/docker/cp-test_multinode-175261-m03_multinode-175261.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 ssh -n multinode-175261-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 ssh -n multinode-175261 "sudo cat /home/docker/cp-test_multinode-175261-m03_multinode-175261.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 cp multinode-175261-m03:/home/docker/cp-test.txt multinode-175261-m02:/home/docker/cp-test_multinode-175261-m03_multinode-175261-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 ssh -n multinode-175261-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 ssh -n multinode-175261-m02 "sudo cat /home/docker/cp-test_multinode-175261-m03_multinode-175261-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (10.21s)

TestMultiNode/serial/StopNode (2.28s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-175261 node stop m03: (1.222633757s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-175261 status: exit status 7 (530.230406ms)
-- stdout --
	multinode-175261
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-175261-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-175261-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-175261 status --alsologtostderr: exit status 7 (524.509023ms)
-- stdout --
	multinode-175261
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-175261-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-175261-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	
-- /stdout --
** stderr ** 
	I1216 11:50:23.167743 1254025 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:50:23.167941 1254025 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:50:23.167958 1254025 out.go:358] Setting ErrFile to fd 2...
	I1216 11:50:23.167968 1254025 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:50:23.168315 1254025 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-1132549/.minikube/bin
	I1216 11:50:23.168579 1254025 out.go:352] Setting JSON to false
	I1216 11:50:23.168617 1254025 mustload.go:65] Loading cluster: multinode-175261
	I1216 11:50:23.168715 1254025 notify.go:220] Checking for updates...
	I1216 11:50:23.169235 1254025 config.go:182] Loaded profile config "multinode-175261": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:50:23.169266 1254025 status.go:174] checking status of multinode-175261 ...
	I1216 11:50:23.170604 1254025 cli_runner.go:164] Run: docker container inspect multinode-175261 --format={{.State.Status}}
	I1216 11:50:23.189786 1254025 status.go:371] multinode-175261 host status = "Running" (err=<nil>)
	I1216 11:50:23.189817 1254025 host.go:66] Checking if "multinode-175261" exists ...
	I1216 11:50:23.190141 1254025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-175261
	I1216 11:50:23.211965 1254025 host.go:66] Checking if "multinode-175261" exists ...
	I1216 11:50:23.212272 1254025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 11:50:23.212320 1254025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-175261
	I1216 11:50:23.239013 1254025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34376 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/multinode-175261/id_rsa Username:docker}
	I1216 11:50:23.338377 1254025 ssh_runner.go:195] Run: systemctl --version
	I1216 11:50:23.343108 1254025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 11:50:23.355431 1254025 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 11:50:23.412442 1254025 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-12-16 11:50:23.402935111 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1216 11:50:23.413107 1254025 kubeconfig.go:125] found "multinode-175261" server: "https://192.168.67.2:8443"
	I1216 11:50:23.413141 1254025 api_server.go:166] Checking apiserver status ...
	I1216 11:50:23.413191 1254025 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1216 11:50:23.424657 1254025 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1384/cgroup
	I1216 11:50:23.434528 1254025 api_server.go:182] apiserver freezer: "4:freezer:/docker/43762ea1b8fe512cc02780bd40713d6ca1cebdd70ec16b3acb5794180924299c/crio/crio-ffa7486b091c1c571b01d7c1a3e05238151da0abefe52788ff6b961db94b7181"
	I1216 11:50:23.434602 1254025 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/43762ea1b8fe512cc02780bd40713d6ca1cebdd70ec16b3acb5794180924299c/crio/crio-ffa7486b091c1c571b01d7c1a3e05238151da0abefe52788ff6b961db94b7181/freezer.state
	I1216 11:50:23.443878 1254025 api_server.go:204] freezer state: "THAWED"
	I1216 11:50:23.443907 1254025 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1216 11:50:23.451846 1254025 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1216 11:50:23.451876 1254025 status.go:463] multinode-175261 apiserver status = Running (err=<nil>)
	I1216 11:50:23.451897 1254025 status.go:176] multinode-175261 status: &{Name:multinode-175261 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 11:50:23.451920 1254025 status.go:174] checking status of multinode-175261-m02 ...
	I1216 11:50:23.452228 1254025 cli_runner.go:164] Run: docker container inspect multinode-175261-m02 --format={{.State.Status}}
	I1216 11:50:23.470524 1254025 status.go:371] multinode-175261-m02 host status = "Running" (err=<nil>)
	I1216 11:50:23.470550 1254025 host.go:66] Checking if "multinode-175261-m02" exists ...
	I1216 11:50:23.470856 1254025 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-175261-m02
	I1216 11:50:23.488581 1254025 host.go:66] Checking if "multinode-175261-m02" exists ...
	I1216 11:50:23.488993 1254025 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1216 11:50:23.489037 1254025 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-175261-m02
	I1216 11:50:23.507218 1254025 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:34381 SSHKeyPath:/home/jenkins/minikube-integration/20107-1132549/.minikube/machines/multinode-175261-m02/id_rsa Username:docker}
	I1216 11:50:23.597998 1254025 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1216 11:50:23.609974 1254025 status.go:176] multinode-175261-m02 status: &{Name:multinode-175261-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1216 11:50:23.610015 1254025 status.go:174] checking status of multinode-175261-m03 ...
	I1216 11:50:23.610322 1254025 cli_runner.go:164] Run: docker container inspect multinode-175261-m03 --format={{.State.Status}}
	I1216 11:50:23.628734 1254025 status.go:371] multinode-175261-m03 host status = "Stopped" (err=<nil>)
	I1216 11:50:23.628819 1254025 status.go:384] host is not running, skipping remaining checks
	I1216 11:50:23.628827 1254025 status.go:176] multinode-175261-m03 status: &{Name:multinode-175261-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.28s)
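
Note: the status probe in the log above runs in layers: docker container inspect for host state, an SSH session asking systemd about the kubelet, a cgroup freezer check for the apiserver process, and finally an HTTP GET against /healthz. A minimal Go sketch of that last step follows (illustrative only, not minikube's status.go; the endpoint is the kubeconfig server address from the log, and TLS verification is skipped purely for brevity):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Endpoint taken from the "found server" line in the log above.
	// InsecureSkipVerify is an illustration-only shortcut.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.67.2:8443/healthz")
	if err != nil {
		fmt.Println("apiserver status = Stopped:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	// The log shows a healthy apiserver answering 200 with body "ok".
	if resp.StatusCode == http.StatusOK && string(body) == "ok" {
		fmt.Println("apiserver status = Running")
	} else {
		fmt.Printf("apiserver returned %d: %s\n", resp.StatusCode, body)
	}
}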

                                                
                                    
TestMultiNode/serial/StartAfterStop (9.98s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-175261 node start m03 -v=7 --alsologtostderr: (9.20845798s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (9.98s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (105.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-175261
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-175261
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-175261: (24.797751554s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-175261 --wait=true -v=8 --alsologtostderr
E1216 11:52:04.607259 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-175261 --wait=true -v=8 --alsologtostderr: (1m21.033064862s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-175261
--- PASS: TestMultiNode/serial/RestartKeepsNodes (105.97s)

                                                
                                    
TestMultiNode/serial/DeleteNode (5.64s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 node delete m03
E1216 11:52:20.831689 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-175261 node delete m03: (4.940521587s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.64s)
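
Note: the go-template above pulls each node's "Ready" condition out of the API response. A hedged Go equivalent, decoding `kubectl get nodes -o json` into just the fields the template touches (the struct here is a minimal assumption for illustration, not a client-go type):

package main

import (
	"encoding/json"
	"fmt"
	"log"
	"os/exec"
)

// nodeList mirrors only the JSON paths the go-template walks.
type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Conditions []struct {
				Type   string `json:"type"`
				Status string `json:"status"`
			} `json:"conditions"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "get", "nodes", "-o", "json").Output()
	if err != nil {
		log.Fatal(err)
	}
	var nodes nodeList
	if err := json.Unmarshal(out, &nodes); err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		for _, c := range n.Status.Conditions {
			if c.Type == "Ready" { // same filter as {{if eq .type "Ready"}}
				fmt.Printf("%s %s\n", n.Metadata.Name, c.Status)
			}
		}
	}
}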

                                                
                                    
TestMultiNode/serial/StopMultiNode (23.79s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-175261 stop: (23.577898948s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-175261 status: exit status 7 (98.985598ms)

                                                
                                                
-- stdout --
	multinode-175261
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-175261-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-175261 status --alsologtostderr: exit status 7 (107.797848ms)

                                                
                                                
-- stdout --
	multinode-175261
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-175261-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1216 11:52:48.947630 1261745 out.go:345] Setting OutFile to fd 1 ...
	I1216 11:52:48.947769 1261745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:52:48.947780 1261745 out.go:358] Setting ErrFile to fd 2...
	I1216 11:52:48.947785 1261745 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 11:52:48.948124 1261745 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-1132549/.minikube/bin
	I1216 11:52:48.948347 1261745 out.go:352] Setting JSON to false
	I1216 11:52:48.948372 1261745 mustload.go:65] Loading cluster: multinode-175261
	I1216 11:52:48.949152 1261745 config.go:182] Loaded profile config "multinode-175261": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 11:52:48.949184 1261745 status.go:174] checking status of multinode-175261 ...
	I1216 11:52:48.949948 1261745 cli_runner.go:164] Run: docker container inspect multinode-175261 --format={{.State.Status}}
	I1216 11:52:48.952479 1261745 notify.go:220] Checking for updates...
	I1216 11:52:48.969948 1261745 status.go:371] multinode-175261 host status = "Stopped" (err=<nil>)
	I1216 11:52:48.969979 1261745 status.go:384] host is not running, skipping remaining checks
	I1216 11:52:48.969988 1261745 status.go:176] multinode-175261 status: &{Name:multinode-175261 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1216 11:52:48.970022 1261745 status.go:174] checking status of multinode-175261-m02 ...
	I1216 11:52:48.970345 1261745 cli_runner.go:164] Run: docker container inspect multinode-175261-m02 --format={{.State.Status}}
	I1216 11:52:49.000219 1261745 status.go:371] multinode-175261-m02 host status = "Stopped" (err=<nil>)
	I1216 11:52:49.000245 1261745 status.go:384] host is not running, skipping remaining checks
	I1216 11:52:49.000252 1261745 status.go:176] multinode-175261-m02 status: &{Name:multinode-175261-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.79s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (53.45s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-175261 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-175261 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (52.720472089s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-175261 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.45s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (33.91s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-175261
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-175261-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-175261-m02 --driver=docker  --container-runtime=crio: exit status 14 (102.062395ms)

                                                
                                                
-- stdout --
	* [multinode-175261-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20107-1132549/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-1132549/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-175261-m02' is duplicated with machine name 'multinode-175261-m02' in profile 'multinode-175261'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-175261-m03 --driver=docker  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-175261-m03 --driver=docker  --container-runtime=crio: (31.434195092s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-175261
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-175261: exit status 80 (337.181489ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-175261 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-175261-m03 already exists in multinode-175261-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_1.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-175261-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-175261-m03: (1.976778688s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.91s)

                                                
                                    
TestPreload (126.15s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-698174 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-698174 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m35.374336807s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-698174 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-698174 image pull gcr.io/k8s-minikube/busybox: (3.247753863s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-698174
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-698174: (5.736708783s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-698174 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-698174 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (18.989926952s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-698174 image list
helpers_test.go:175: Cleaning up "test-preload-698174" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-698174
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-698174: (2.490622952s)
--- PASS: TestPreload (126.15s)

                                                
                                    
TestScheduledStopUnix (107.89s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-162464 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-162464 --memory=2048 --driver=docker  --container-runtime=crio: (31.49470128s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-162464 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-162464 -n scheduled-stop-162464
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-162464 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1216 11:56:58.663801 1137938 retry.go:31] will retry after 91.704µs: open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/scheduled-stop-162464/pid: no such file or directory
I1216 11:56:58.664338 1137938 retry.go:31] will retry after 218.908µs: open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/scheduled-stop-162464/pid: no such file or directory
I1216 11:56:58.666751 1137938 retry.go:31] will retry after 255.399µs: open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/scheduled-stop-162464/pid: no such file or directory
I1216 11:56:58.667875 1137938 retry.go:31] will retry after 277.395µs: open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/scheduled-stop-162464/pid: no such file or directory
I1216 11:56:58.668998 1137938 retry.go:31] will retry after 666.896µs: open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/scheduled-stop-162464/pid: no such file or directory
I1216 11:56:58.670115 1137938 retry.go:31] will retry after 863.939µs: open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/scheduled-stop-162464/pid: no such file or directory
I1216 11:56:58.671239 1137938 retry.go:31] will retry after 986.095µs: open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/scheduled-stop-162464/pid: no such file or directory
I1216 11:56:58.672342 1137938 retry.go:31] will retry after 1.901489ms: open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/scheduled-stop-162464/pid: no such file or directory
I1216 11:56:58.674568 1137938 retry.go:31] will retry after 2.462517ms: open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/scheduled-stop-162464/pid: no such file or directory
I1216 11:56:58.677788 1137938 retry.go:31] will retry after 4.19977ms: open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/scheduled-stop-162464/pid: no such file or directory
I1216 11:56:58.683198 1137938 retry.go:31] will retry after 3.620558ms: open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/scheduled-stop-162464/pid: no such file or directory
I1216 11:56:58.687518 1137938 retry.go:31] will retry after 9.533417ms: open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/scheduled-stop-162464/pid: no such file or directory
I1216 11:56:58.697820 1137938 retry.go:31] will retry after 17.416686ms: open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/scheduled-stop-162464/pid: no such file or directory
I1216 11:56:58.716139 1137938 retry.go:31] will retry after 13.084677ms: open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/scheduled-stop-162464/pid: no such file or directory
I1216 11:56:58.730373 1137938 retry.go:31] will retry after 28.180836ms: open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/scheduled-stop-162464/pid: no such file or directory
I1216 11:56:58.759616 1137938 retry.go:31] will retry after 24.362431ms: open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/scheduled-stop-162464/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-162464 --cancel-scheduled
E1216 11:57:04.608073 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
E1216 11:57:20.831755 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-162464 -n scheduled-stop-162464
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-162464
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-162464 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-162464
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-162464: exit status 7 (77.326371ms)

                                                
                                                
-- stdout --
	scheduled-stop-162464
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-162464 -n scheduled-stop-162464
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-162464 -n scheduled-stop-162464: exit status 7 (72.905032ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-162464" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-162464
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-162464: (4.775945775s)
--- PASS: TestScheduledStopUnix (107.89s)
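
Note: the retry.go lines above show the pattern in play: reopen the pid file after a randomized delay whose scale grows with each failure. A self-contained sketch of that pattern (the helper name, constants, and path are invented for illustration; this is not minikube's retry.go):

package main

import (
	"fmt"
	"math/rand"
	"os"
	"time"
)

// retryAfter calls fn until it succeeds or attempts run out, sleeping a
// jittered delay in [delay/2, 1.5*delay) whose scale doubles each round.
func retryAfter(attempts int, base time.Duration, fn func() error) error {
	delay := base
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		wait := delay/2 + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: %v\n", wait, err)
		time.Sleep(wait)
		delay *= 2
	}
	return err
}

func main() {
	// Placeholder path standing in for the profile pid file in the log.
	err := retryAfter(16, 100*time.Microsecond, func() error {
		_, statErr := os.Stat("/tmp/some-profile/pid")
		return statErr
	})
	if err != nil {
		fmt.Println("gave up:", err)
	}
}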

                                                
                                    
TestInsufficientStorage (10.19s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-188285 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-188285 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (7.664839005s)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"710a3052-cfb5-414c-adac-6cc44c1edc44","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-188285] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ace9e5b5-0921-40df-87ce-4874a568661c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20107"}}
	{"specversion":"1.0","id":"07a6f0e9-82ef-4e77-92ee-25b2718bc658","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5f58632e-5ab1-4f81-a3c4-3e8a1490fe0d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20107-1132549/kubeconfig"}}
	{"specversion":"1.0","id":"dfd16c29-1454-4ac4-94b7-2842846bbd07","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-1132549/.minikube"}}
	{"specversion":"1.0","id":"a36c8dd7-1e3c-4a7d-8bda-34dfd6b735d0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"e6d80265-3649-4cbf-8341-fe2151da59aa","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"c2dff6c4-8530-42cf-b5e6-29d79d90d18e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"b454cb80-052e-4e02-b3c4-c5e354786991","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"0eeac950-202e-4df7-b1f8-e18b2bedbb06","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"8a68d7d5-cb03-496a-a3d1-fc07485693ac","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"579fb589-8048-4ca8-ba2d-cae22f70e0d1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-188285\" primary control-plane node in \"insufficient-storage-188285\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"1b1bd84a-627f-4f10-96d9-0a1a40917c68","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1733912881-20083 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"f7fbe20a-1f2f-44cb-a8cf-fdefd35153e2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"303e2dcf-8c33-4325-a529-0798791b5172","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

                                                
                                                
-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-188285 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-188285 --output=json --layout=cluster: exit status 7 (286.683917ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-188285","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-188285","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 11:58:22.473929 1279574 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-188285" does not appear in /home/jenkins/minikube-integration/20107-1132549/kubeconfig

                                                
                                                
** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-188285 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-188285 --output=json --layout=cluster: exit status 7 (282.371653ms)

                                                
                                                
-- stdout --
	{"Name":"insufficient-storage-188285","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-188285","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
** stderr ** 
	E1216 11:58:22.755640 1279636 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-188285" does not appear in /home/jenkins/minikube-integration/20107-1132549/kubeconfig
	E1216 11:58:22.765794 1279636 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/insufficient-storage-188285/events.json: no such file or directory

                                                
                                                
** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-188285" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-188285
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-188285: (1.960184327s)
--- PASS: TestInsufficientStorage (10.19s)
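
Note: each `--output=json` stdout line above is a CloudEvents 1.0 envelope, with the step or error payload riding in the "data" field. A hedged Go sketch that consumes such a stream from stdin (field names are taken from the lines above; everything else is illustrative):

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

// cloudEvent matches the envelope shape seen in the log; all data values
// arrive as strings there, so a map[string]string suffices.
type cloudEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	// Usage: minikube start ... --output=json | ./events_decode
	sc := bufio.NewScanner(os.Stdin)
	sc.Buffer(make([]byte, 1024*1024), 1024*1024) // some event lines are long
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip non-JSON lines
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("ERROR (exit %s): %s\n", ev.Data["exitcode"], ev.Data["message"])
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		}
	}
}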

                                                
                                    
TestRunningBinaryUpgrade (107.54s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.770376082 start -p running-upgrade-329303 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.770376082 start -p running-upgrade-329303 --memory=2200 --vm-driver=docker  --container-runtime=crio: (34.460146711s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-329303 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-329303 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m9.365008384s)
helpers_test.go:175: Cleaning up "running-upgrade-329303" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-329303
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-329303: (3.110862605s)
--- PASS: TestRunningBinaryUpgrade (107.54s)

                                                
                                    
TestKubernetesUpgrade (392.66s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-589919 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-589919 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m14.399953536s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-589919
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-589919: (1.474716564s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-589919 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-589919 status --format={{.Host}}: exit status 7 (76.028731ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-589919 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-589919 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m42.983135931s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-589919 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-589919 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-589919 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (109.939386ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-589919] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20107-1132549/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-1132549/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-589919
	    minikube start -p kubernetes-upgrade-589919 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-5899192 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-589919 --kubernetes-version=v1.31.2
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-589919 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-589919 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (31.041939623s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-589919" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-589919
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-589919: (2.413922036s)
--- PASS: TestKubernetesUpgrade (392.66s)
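
Note: the downgrade attempt above fails fast with exit 106 before touching the cluster. A sketch of the guard's observable effect using a semantic-version comparison (an assumption built on golang.org/x/mod/semver; it is not minikube's source):

package main

import (
	"fmt"

	"golang.org/x/mod/semver"
)

// checkVersion refuses to move an existing cluster to an older Kubernetes
// version, matching the behavior seen in the log.
func checkVersion(existing, requested string) error {
	if semver.Compare(requested, existing) < 0 {
		return fmt.Errorf("K8S_DOWNGRADE_UNSUPPORTED: unable to safely downgrade existing Kubernetes %s cluster to %s", existing, requested)
	}
	return nil
}

func main() {
	if err := checkVersion("v1.31.2", "v1.20.0"); err != nil {
		fmt.Println("X Exiting due to", err) // the real CLI exits 106 here
	}
}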

                                                
                                    
TestMissingContainerUpgrade (167.95s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.3043794334 start -p missing-upgrade-140443 --memory=2200 --driver=docker  --container-runtime=crio
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.3043794334 start -p missing-upgrade-140443 --memory=2200 --driver=docker  --container-runtime=crio: (1m31.411611646s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-140443
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-140443: (10.412885435s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-140443
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-140443 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1216 12:00:23.900467 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-140443 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m3.056461082s)
helpers_test.go:175: Cleaning up "missing-upgrade-140443" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-140443
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-140443: (2.419454949s)
--- PASS: TestMissingContainerUpgrade (167.95s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-559509 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-559509 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (101.110657ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-559509] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20107-1132549/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-1132549/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (41.48s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-559509 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-559509 --driver=docker  --container-runtime=crio: (40.890142502s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-559509 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (41.48s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.42s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-559509 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-559509 --no-kubernetes --driver=docker  --container-runtime=crio: (6.002167145s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-559509 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-559509 status -o json: exit status 2 (324.662ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-559509","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-559509
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-559509: (2.08852601s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.42s)

                                                
                                    
TestNoKubernetes/serial/Start (7.62s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-559509 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-559509 --no-kubernetes --driver=docker  --container-runtime=crio: (7.621570034s)
--- PASS: TestNoKubernetes/serial/Start (7.62s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-559509 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-559509 "sudo systemctl is-active --quiet service kubelet": exit status 1 (271.224072ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.27s)
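
Note: `systemctl is-active --quiet <unit>` reports through its exit code: 0 when the unit is active, non-zero otherwise (3 in the log above, systemd's code for inactive/dead). A small Go sketch reading that code (illustrative; run on a systemd host):

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("systemctl", "is-active", "--quiet", "kubelet")
	err := cmd.Run()
	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("kubelet is active")
	case errors.As(err, &exitErr):
		// Exit 3 here is what the test above treats as "K8s not running".
		fmt.Printf("kubelet is not active (exit %d)\n", exitErr.ExitCode())
	default:
		fmt.Println("could not run systemctl:", err)
	}
}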

                                                
                                    
TestNoKubernetes/serial/ProfileList (1.03s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.03s)

                                                
                                    
TestNoKubernetes/serial/Stop (1.27s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-559509
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-559509: (1.271779268s)
--- PASS: TestNoKubernetes/serial/Stop (1.27s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (7.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-559509 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-559509 --driver=docker  --container-runtime=crio: (7.751115231s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (7.75s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-559509 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-559509 "sudo systemctl is-active --quiet service kubelet": exit status 1 (367.771826ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.37s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.6s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.60s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (71.91s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.3928645402 start -p stopped-upgrade-575072 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.3928645402 start -p stopped-upgrade-575072 --memory=2200 --vm-driver=docker  --container-runtime=crio: (38.630394315s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.3928645402 -p stopped-upgrade-575072 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.3928645402 -p stopped-upgrade-575072 stop: (2.430598933s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-575072 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1216 12:02:04.607978 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:02:20.831883 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-575072 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (30.845223471s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (71.91s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-575072
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.00s)

                                                
                                    
TestPause/serial/Start (55.48s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-225141 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
E1216 12:05:07.677396 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-225141 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (55.479551557s)
--- PASS: TestPause/serial/Start (55.48s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (24.25s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-225141 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-225141 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.222466369s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (24.25s)

                                                
                                    
TestPause/serial/Pause (1.04s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-225141 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-225141 --alsologtostderr -v=5: (1.044521736s)
--- PASS: TestPause/serial/Pause (1.04s)

                                                
                                    
TestPause/serial/VerifyStatus (0.42s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-225141 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-225141 --output=json --layout=cluster: exit status 2 (418.668828ms)

                                                
                                                
-- stdout --
	{"Name":"pause-225141","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-225141","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.42s)
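
The status JSON above is compact enough to decode with a few structs. A minimal Go sketch follows; it is an illustration rather than the suite's own code, and the field names are inferred solely from the stdout block above:

-- sketch --
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Shapes inferred from the stdout above; only the fields we read are declared.
type component struct {
	Name       string `json:"Name"`
	StatusName string `json:"StatusName"`
}

type node struct {
	Name       string               `json:"Name"`
	Components map[string]component `json:"Components"`
}

type clusterStatus struct {
	Name       string `json:"Name"`
	StatusName string `json:"StatusName"`
	Nodes      []node `json:"Nodes"`
}

func main() {
	// status exits non-zero for a paused cluster (exit status 2 above),
	// so keep the captured stdout even when the command reports an error.
	out, _ := exec.Command("out/minikube-linux-arm64", "status",
		"-p", "pause-225141", "--output=json", "--layout=cluster").Output()

	var st clusterStatus
	if err := json.Unmarshal(out, &st); err != nil {
		fmt.Println("unmarshal:", err)
		return
	}
	// Expect "Paused" overall, with the kubelet reported "Stopped" per node.
	fmt.Println(st.Name, st.StatusName)
	for _, n := range st.Nodes {
		for _, c := range n.Components {
			fmt.Printf("  %s/%s: %s\n", n.Name, c.Name, c.StatusName)
		}
	}
}
-- /sketch --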

TestPause/serial/Unpause (1.07s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-225141 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-225141 --alsologtostderr -v=5: (1.069054902s)
--- PASS: TestPause/serial/Unpause (1.07s)

TestPause/serial/PauseAgain (1.37s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-225141 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-225141 --alsologtostderr -v=5: (1.374792891s)
--- PASS: TestPause/serial/PauseAgain (1.37s)

TestPause/serial/DeletePaused (3.03s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-225141 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-225141 --alsologtostderr -v=5: (3.034909549s)
--- PASS: TestPause/serial/DeletePaused (3.03s)

TestPause/serial/VerifyDeletedResources (0.41s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-225141
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-225141: exit status 1 (16.955474ms)

-- stdout --
	[]

-- /stdout --
** stderr ** 
	Error response from daemon: get pause-225141: no such volume

** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.41s)
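
The deleted-resources check leans on `docker volume inspect` exiting non-zero once the profile volume is gone. A minimal Go sketch of the same probe, assuming only that the docker CLI is on PATH:

-- sketch --
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// A removed volume makes `docker volume inspect` exit non-zero with
	// "no such volume" on stderr, as in the log above.
	cmd := exec.Command("docker", "volume", "inspect", "pause-225141")
	out, err := cmd.Output()

	var exitErr *exec.ExitError
	switch {
	case err == nil:
		fmt.Println("volume still exists:", string(out))
	case errors.As(err, &exitErr):
		fmt.Printf("volume gone (exit %d): %s", exitErr.ExitCode(), exitErr.Stderr)
	default:
		fmt.Println("could not run docker:", err)
	}
}
-- /sketch --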

TestNetworkPlugins/group/false (5.52s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-603834 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-603834 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (260.947753ms)

-- stdout --
	* [false-603834] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20107
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20107-1132549/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-1132549/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1216 12:06:13.408613 1319374 out.go:345] Setting OutFile to fd 1 ...
	I1216 12:06:13.408858 1319374 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:06:13.408883 1319374 out.go:358] Setting ErrFile to fd 2...
	I1216 12:06:13.408902 1319374 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1216 12:06:13.409190 1319374 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20107-1132549/.minikube/bin
	I1216 12:06:13.409669 1319374 out.go:352] Setting JSON to false
	I1216 12:06:13.410902 1319374 start.go:129] hostinfo: {"hostname":"ip-172-31-21-244","uptime":31719,"bootTime":1734319055,"procs":200,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"da8ac1fd-6236-412a-a346-95873c98230d"}
	I1216 12:06:13.411004 1319374 start.go:139] virtualization:  
	I1216 12:06:13.416787 1319374 out.go:177] * [false-603834] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1216 12:06:13.420141 1319374 notify.go:220] Checking for updates...
	I1216 12:06:13.424742 1319374 out.go:177]   - MINIKUBE_LOCATION=20107
	I1216 12:06:13.428284 1319374 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1216 12:06:13.431883 1319374 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20107-1132549/kubeconfig
	I1216 12:06:13.434829 1319374 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20107-1132549/.minikube
	I1216 12:06:13.437764 1319374 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1216 12:06:13.440713 1319374 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1216 12:06:13.444261 1319374 config.go:182] Loaded profile config "force-systemd-flag-632755": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1216 12:06:13.444365 1319374 driver.go:394] Setting default libvirt URI to qemu:///system
	I1216 12:06:13.484526 1319374 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1216 12:06:13.484691 1319374 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1216 12:06:13.574215 1319374 info.go:266] docker info: {ID:5FDH:SA5P:5GCT:NLAS:B73P:SGDQ:PBG5:UBVH:UZY3:RXGO:CI7S:WAIH Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:33 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-16 12:06:13.565176628 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-21-244 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1216 12:06:13.574323 1319374 docker.go:318] overlay module found
	I1216 12:06:13.578848 1319374 out.go:177] * Using the docker driver based on user configuration
	I1216 12:06:13.581747 1319374 start.go:297] selected driver: docker
	I1216 12:06:13.581772 1319374 start.go:901] validating driver "docker" against <nil>
	I1216 12:06:13.581787 1319374 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1216 12:06:13.586746 1319374 out.go:201] 
	W1216 12:06:13.589575 1319374 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1216 12:06:13.592349 1319374 out.go:201] 

** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-603834 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-603834

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-603834

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-603834

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-603834

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-603834

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-603834

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-603834

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-603834

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-603834

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-603834

>>> host: /etc/nsswitch.conf:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> host: /etc/hosts:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> host: /etc/resolv.conf:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-603834

>>> host: crictl pods:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> host: crictl containers:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> k8s: describe netcat deployment:
error: context "false-603834" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-603834" does not exist

>>> k8s: netcat logs:
error: context "false-603834" does not exist

>>> k8s: describe coredns deployment:
error: context "false-603834" does not exist

>>> k8s: describe coredns pods:
error: context "false-603834" does not exist

>>> k8s: coredns logs:
error: context "false-603834" does not exist

>>> k8s: describe api server pod(s):
error: context "false-603834" does not exist

>>> k8s: api server logs:
error: context "false-603834" does not exist

>>> host: /etc/cni:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> host: ip a s:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> host: ip r s:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> host: iptables-save:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> host: iptables table nat:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> k8s: describe kube-proxy daemon set:
error: context "false-603834" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-603834" does not exist

>>> k8s: kube-proxy logs:
error: context "false-603834" does not exist

>>> host: kubelet daemon status:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> host: kubelet daemon config:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> k8s: kubelet logs:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/20107-1132549/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Mon, 16 Dec 2024 12:06:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: cluster_info
    server: https://192.168.85.2:8443
  name: force-systemd-flag-632755
contexts:
- context:
    cluster: force-systemd-flag-632755
    extensions:
    - extension:
        last-update: Mon, 16 Dec 2024 12:06:16 UTC
        provider: minikube.sigs.k8s.io
        version: v1.34.0
      name: context_info
    namespace: default
    user: force-systemd-flag-632755
  name: force-systemd-flag-632755
current-context: force-systemd-flag-632755
kind: Config
preferences: {}
users:
- name: force-systemd-flag-632755
  user:
    client-certificate: /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/force-systemd-flag-632755/client.crt
    client-key: /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/force-systemd-flag-632755/client.key

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-603834

>>> host: docker daemon status:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> host: docker daemon config:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> host: /etc/docker/daemon.json:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> host: docker system info:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> host: cri-docker daemon status:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> host: cri-docker daemon config:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> host: cri-dockerd version:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> host: containerd daemon status:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> host: containerd daemon config:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> host: /etc/containerd/config.toml:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> host: containerd config dump:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> host: crio daemon status:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> host: crio daemon config:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> host: /etc/crio:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

>>> host: crio config:
* Profile "false-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-603834"

----------------------- debugLogs end: false-603834 [took: 5.023762572s] --------------------------------
helpers_test.go:175: Cleaning up "false-603834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-603834
--- PASS: TestNetworkPlugins/group/false (5.52s)
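
Exit status 14 (MK_USAGE) is the expected result here: minikube refuses --cni=false with the crio runtime before creating anything, so the test passes by observing the usage error. A minimal Go sketch of that assertion, reusing the binary path and flags from the log:

-- sketch --
package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	// crio needs a CNI plugin; --cni=false must fail fast with a usage error.
	cmd := exec.Command("out/minikube-linux-arm64", "start", "-p", "false-603834",
		"--memory=2048", "--cni=false", "--driver=docker", "--container-runtime=crio")
	err := cmd.Run()

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 14 {
		fmt.Println("got the expected MK_USAGE exit status 14")
		return
	}
	fmt.Println("unexpected result:", err)
}
-- /sketch --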

TestStartStop/group/old-k8s-version/serial/FirstStart (166.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-816966 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-816966 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m46.012336687s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (166.01s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.8s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-816966 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [5b6bd8bb-389f-4ed3-8d1c-828a4000c598] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [5b6bd8bb-389f-4ed3-8d1c-828a4000c598] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.017301587s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-816966 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.80s)

TestStartStop/group/no-preload/serial/FirstStart (64.81s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-666352 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-666352 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (1m4.807335727s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (64.81s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.42s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-816966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-816966 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.228786576s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-816966 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.42s)

TestStartStop/group/old-k8s-version/serial/Stop (13.95s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-816966 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-816966 --alsologtostderr -v=3: (13.953781593s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (13.95s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-816966 -n old-k8s-version-816966
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-816966 -n old-k8s-version-816966: exit status 7 (93.571947ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-816966 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.24s)
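
minikube status exits non-zero for a stopped host, which is why the test logs "status error: exit status 7 (may be ok)" and proceeds. A minimal Go sketch that reads the host state while tolerating that exit code; treating 7 as "stopped, not a failure" is taken from the log above, not from documented exit-code semantics:

-- sketch --
package main

import (
	"errors"
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "status",
		"--format={{.Host}}", "-p", "old-k8s-version-816966")
	out, err := cmd.Output()
	host := strings.TrimSpace(string(out))

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) && exitErr.ExitCode() == 7 {
		// Matches the log: "Stopped" on stdout plus exit status 7 (may be ok).
		fmt.Println("host state:", host)
		return
	}
	if err != nil {
		fmt.Println("status failed:", err)
		return
	}
	fmt.Println("host state:", host)
}
-- /sketch --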

TestStartStop/group/old-k8s-version/serial/SecondStart (148.3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-816966 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-816966 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m27.929105199s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-816966 -n old-k8s-version-816966
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (148.30s)

TestStartStop/group/no-preload/serial/DeployApp (10.42s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-666352 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [aa2e1c23-c56b-4de6-9076-9bfc2eb5fa1d] Pending
helpers_test.go:344: "busybox" [aa2e1c23-c56b-4de6-9076-9bfc2eb5fa1d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [aa2e1c23-c56b-4de6-9076-9bfc2eb5fa1d] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.004463623s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-666352 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.42s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.7s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-666352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-666352 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.526123375s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-666352 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.70s)

TestStartStop/group/no-preload/serial/Stop (12.36s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-666352 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-666352 --alsologtostderr -v=3: (12.363616012s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (12.36s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-666352 -n no-preload-666352
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-666352 -n no-preload-666352: exit status 7 (84.726746ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-666352 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/no-preload/serial/SecondStart (266.53s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-666352 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1216 12:12:04.607430 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:12:20.832306 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-666352 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m26.170216127s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-666352 -n no-preload-666352
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (266.53s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-m4mtz" [d72c68aa-5b51-4884-aff3-5d6b9c20c308] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004003732s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-m4mtz" [d72c68aa-5b51-4884-aff3-5d6b9c20c308] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004733207s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-816966 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-816966 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/old-k8s-version/serial/Pause (3s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-816966 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-816966 -n old-k8s-version-816966
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-816966 -n old-k8s-version-816966: exit status 2 (338.685072ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-816966 -n old-k8s-version-816966
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-816966 -n old-k8s-version-816966: exit status 2 (332.019956ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-816966 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-816966 -n old-k8s-version-816966
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-816966 -n old-k8s-version-816966
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.00s)
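
The pause check is a three-step sequence: pause, confirm the apiserver reports Paused and the kubelet Stopped (each via exit status 2, which the test tolerates), then unpause and re-check. A minimal Go sketch of the same loop, assuming the binary and profile names shown in the log:

-- sketch --
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// componentState runs `minikube status --format` for one component and
// returns the printed state; a non-zero exit (status 2 above) still
// leaves the state on stdout, so the error is deliberately ignored.
func componentState(profile, format string) string {
	out, _ := exec.Command("out/minikube-linux-arm64", "status",
		"--format="+format, "-p", profile).Output()
	return strings.TrimSpace(string(out))
}

func main() {
	profile := "old-k8s-version-816966"

	exec.Command("out/minikube-linux-arm64", "pause", "-p", profile).Run()
	fmt.Println("apiserver:", componentState(profile, "{{.APIServer}}")) // expect Paused
	fmt.Println("kubelet:  ", componentState(profile, "{{.Kubelet}}"))   // expect Stopped

	exec.Command("out/minikube-linux-arm64", "unpause", "-p", profile).Run()
	fmt.Println("apiserver:", componentState(profile, "{{.APIServer}}")) // state after unpause
}
-- /sketch --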

TestStartStop/group/embed-certs/serial/FirstStart (55.04s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-551188 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-551188 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (55.037381404s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (55.04s)

TestStartStop/group/embed-certs/serial/DeployApp (10.35s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-551188 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [592fb2cc-75f5-4b44-85f5-f1573214220c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [592fb2cc-75f5-4b44-85f5-f1573214220c] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.00482932s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-551188 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.35s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-551188 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-551188 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.092026722s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-551188 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/embed-certs/serial/Stop (11.94s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-551188 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-551188 --alsologtostderr -v=3: (11.942342316s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (11.94s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-551188 -n embed-certs-551188
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-551188 -n embed-certs-551188: exit status 7 (76.952525ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-551188 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/embed-certs/serial/SecondStart (265.68s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-551188 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1216 12:15:29.239356 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/old-k8s-version-816966/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:15:29.246226 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/old-k8s-version-816966/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:15:29.257593 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/old-k8s-version-816966/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:15:29.279047 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/old-k8s-version-816966/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:15:29.320395 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/old-k8s-version-816966/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:15:29.401831 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/old-k8s-version-816966/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:15:29.563291 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/old-k8s-version-816966/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:15:29.885131 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/old-k8s-version-816966/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:15:30.527402 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/old-k8s-version-816966/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:15:31.808784 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/old-k8s-version-816966/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:15:34.370197 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/old-k8s-version-816966/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:15:39.491638 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/old-k8s-version-816966/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:15:49.733670 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/old-k8s-version-816966/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:16:10.215225 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/old-k8s-version-816966/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-551188 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m25.303589813s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-551188 -n embed-certs-551188
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (265.68s)
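
Note on the cert_rotation spam above: these lines come from client-go's certificate-rotation watcher failing to re-read the client certificate of the already-deleted old-k8s-version-816966 profile, and the timestamps show its retry interval roughly doubling, from about 10ms apart at first to ~20s apart, until the run moves on. A minimal Go sketch of that retry-with-exponential-backoff shape, assuming a hypothetical loadClientCert helper rather than minikube's or client-go's actual code:

package main

import (
	"fmt"
	"os"
	"time"
)

// loadClientCert stands in for the rotation watcher's attempt to re-read a
// client certificate; here it only checks that the file still exists.
func loadClientCert(path string) error {
	_, err := os.Stat(path)
	return err
}

func main() {
	// Path in the style of the log; the profile behind it has been deleted.
	path := "/home/jenkins/.minikube/profiles/old-k8s-version-816966/client.crt"

	backoff := 10 * time.Millisecond // first retries land ~10ms apart
	const maxBackoff = 20 * time.Second

	for attempt := 1; attempt <= 14; attempt++ {
		err := loadClientCert(path)
		if err == nil {
			return // certificate is readable again; rotation can proceed
		}
		fmt.Printf("E cert_rotation: key failed with : %v (retry in %v)\n", err, backoff)
		time.Sleep(backoff)
		backoff *= 2
		if backoff > maxBackoff {
			backoff = maxBackoff
		}
	}
}

The errors are harmless to this test: nothing still uses the stale profile, which is why SecondStart passes despite the noise.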

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mdp2g" [0249bcde-847c-4d9f-8700-c7b2e43c8326] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003774561s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-mdp2g" [0249bcde-847c-4d9f-8700-c7b2e43c8326] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004151738s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-666352 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (6.10s)
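
Both AfterStop checks reduce to the same primitive: poll until a pod matching a label selector reports Ready, with a deadline. A self-contained client-go sketch of that loop, reusing the namespace and selector from the log; waitPodsReady is a hypothetical helper, not the suite's actual implementation:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodsReady polls until at least one pod matching selector in ns has the
// Ready condition set to True, or ctx expires.
func waitPodsReady(ctx context.Context, cs *kubernetes.Clientset, ns, selector string) error {
	for {
		pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
		if err != nil {
			return err
		}
		for _, p := range pods.Items {
			for _, c := range p.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(2 * time.Second): // poll interval
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 9*time.Minute) // the test's 9m0s budget
	defer cancel()
	if err := waitPodsReady(ctx, kubernetes.NewForConfigOrDie(cfg), "kubernetes-dashboard", "k8s-app=kubernetes-dashboard"); err != nil {
		panic(err)
	}
	fmt.Println("dashboard pod ready")
}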

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-666352 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.24s)
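
VerifyKubernetesImages shells out to minikube image list --format=json and flags anything outside the expected image set for this Kubernetes version. A sketch of that check; the JSON field names (id, repoTags, size) are assumed from current minikube output, and the filter is deliberately simplified compared to the suite's exact expected-image comparison:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// image mirrors the fields we rely on from `minikube image list --format=json`
// (field names are an assumption, not taken from this report).
type image struct {
	ID       string   `json:"id"`
	RepoTags []string `json:"repoTags"`
	Size     string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-arm64", "-p", "no-preload-666352",
		"image", "list", "--format=json").Output()
	if err != nil {
		panic(err)
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		panic(err)
	}
	for _, img := range images {
		for _, tag := range img.RepoTags {
			// Simplified stand-in for the expected set: core images live under
			// registry.k8s.io, plus minikube's storage-provisioner.
			if strings.Contains(tag, "registry.k8s.io") || strings.Contains(tag, "storage-provisioner") {
				continue
			}
			fmt.Println("Found non-minikube image:", tag)
		}
	}
}

Run against this profile, the kindnetd and busybox tags above are exactly what such a filter would report.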

TestStartStop/group/no-preload/serial/Pause (3.28s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-666352 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-666352 -n no-preload-666352
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-666352 -n no-preload-666352: exit status 2 (328.430252ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-666352 -n no-preload-666352
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-666352 -n no-preload-666352: exit status 2 (322.172837ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-666352 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-666352 -n no-preload-666352
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-666352 -n no-preload-666352
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.28s)
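
The Pause sequence is pure CLI driving: pause, confirm status reports Paused/Stopped while tolerating the non-zero exit (the "(may be ok)" lines), unpause, then confirm recovery. A condensed Go sketch of that flow; run is our helper, and the binary path and flags match the log:

package main

import (
	"fmt"
	"os/exec"
)

// run executes a minikube subcommand, returning its stdout and exit code.
func run(args ...string) (string, int) {
	out, err := exec.Command("out/minikube-linux-arm64", args...).Output()
	if ee, ok := err.(*exec.ExitError); ok {
		return string(out), ee.ExitCode()
	} else if err != nil {
		panic(err)
	}
	return string(out), 0
}

func main() {
	p := "no-preload-666352"
	run("pause", "-p", p, "--alsologtostderr", "-v=1")

	// While paused, status exits 2; the test records it and moves on.
	if out, code := run("status", "--format={{.APIServer}}", "-p", p, "-n", p); code == 2 {
		fmt.Printf("status error: exit status 2 (may be ok): %s", out)
	}

	run("unpause", "-p", p, "--alsologtostderr", "-v=1")

	// After unpause, the status probe must succeed again.
	if _, code := run("status", "--format={{.APIServer}}", "-p", p, "-n", p); code != 0 {
		panic("apiserver not running after unpause")
	}
}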

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.21s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-682312 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1216 12:16:51.176660 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/old-k8s-version-816966/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:17:03.902767 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:17:04.607139 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/functional-300067/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:17:20.831698 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-682312 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (51.209457842s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.21s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-682312 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [48c57f11-dda4-4548-aee7-830ca32a95e8] Pending
helpers_test.go:344: "busybox" [48c57f11-dda4-4548-aee7-830ca32a95e8] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [48c57f11-dda4-4548-aee7-830ca32a95e8] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 10.0059021s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-682312 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (10.35s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-682312 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-682312 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.11s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.96s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-682312 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-682312 --alsologtostderr -v=3: (11.955862938s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.96s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.2s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-682312 -n default-k8s-diff-port-682312
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-682312 -n default-k8s-diff-port-682312: exit status 7 (74.384618ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-682312 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.20s)
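
The exit status 7 here is informative: as we read minikube status's documented exit-code bitmask, the host, cluster, and Kubernetes states are encoded on successive bits, so 7 = 1+2+4 means all three are down, which is exactly what the test wants to see right after a stop. A tiny decoder sketch of that reading:

package main

import "fmt"

// decodeStatusExit interprets the assumed bitmask: bit 0 = host not OK,
// bit 1 = cluster (kubelet) not OK, bit 2 = kubernetes (apiserver) not OK.
func decodeStatusExit(code int) {
	for bit, label := range []string{"host NOK", "cluster NOK", "kubernetes NOK"} {
		if code&(1<<bit) != 0 {
			fmt.Println(label)
		}
	}
}

func main() {
	decodeStatusExit(7) // prints all three lines: the profile is fully stopped
}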

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (281.52s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-682312 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1216 12:18:13.098021 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/old-k8s-version-816966/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-682312 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m40.984518147s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-682312 -n default-k8s-diff-port-682312
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (281.52s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-gg5bt" [814fc156-6f48-4ef3-84e5-91bb11318365] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003004863s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-gg5bt" [814fc156-6f48-4ef3-84e5-91bb11318365] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004135696s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-551188 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-551188 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.25s)

TestStartStop/group/embed-certs/serial/Pause (3.16s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-551188 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-551188 -n embed-certs-551188
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-551188 -n embed-certs-551188: exit status 2 (336.188288ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-551188 -n embed-certs-551188
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-551188 -n embed-certs-551188: exit status 2 (341.145511ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-551188 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-551188 -n embed-certs-551188
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-551188 -n embed-certs-551188
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.16s)

TestStartStop/group/newest-cni/serial/FirstStart (34.68s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-747887 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-747887 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (34.680696178s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (34.68s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-747887 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-747887 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.086334116s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.09s)

TestStartStop/group/newest-cni/serial/Stop (1.3s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-747887 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-747887 --alsologtostderr -v=3: (1.298778382s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.30s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-747887 -n newest-cni-747887
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-747887 -n newest-cni-747887: exit status 7 (87.403411ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-747887 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.21s)

TestStartStop/group/newest-cni/serial/SecondStart (15.6s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-747887 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1216 12:20:29.238227 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/old-k8s-version-816966/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-747887 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (15.15134842s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-747887 -n newest-cni-747887
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (15.60s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.3s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-747887 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.30s)

TestStartStop/group/newest-cni/serial/Pause (3.59s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-747887 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p newest-cni-747887 --alsologtostderr -v=1: (1.024836619s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-747887 -n newest-cni-747887
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-747887 -n newest-cni-747887: exit status 2 (510.997977ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-747887 -n newest-cni-747887
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-747887 -n newest-cni-747887: exit status 2 (439.786151ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-747887 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-747887 -n newest-cni-747887
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-747887 -n newest-cni-747887
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.59s)

TestNetworkPlugins/group/auto/Start (54.33s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-603834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
E1216 12:20:56.939923 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/old-k8s-version-816966/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-603834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (54.328417648s)
--- PASS: TestNetworkPlugins/group/auto/Start (54.33s)

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-603834 "pgrep -a kubelet"
I1216 12:21:35.493820 1137938 config.go:182] Loaded profile config "auto-603834": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

TestNetworkPlugins/group/auto/NetCatPod (10.32s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-603834 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-kjbcg" [1bf0542b-0222-4029-b3f0-44ea593ed231] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1216 12:21:39.644461 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/no-preload-666352/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:21:39.651050 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/no-preload-666352/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:21:39.665081 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/no-preload-666352/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:21:39.687129 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/no-preload-666352/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:21:39.728481 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/no-preload-666352/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:21:39.809836 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/no-preload-666352/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:21:39.971415 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/no-preload-666352/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:21:40.293087 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/no-preload-666352/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-kjbcg" [1bf0542b-0222-4029-b3f0-44ea593ed231] Running
E1216 12:21:40.934679 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/no-preload-666352/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:21:42.216425 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/no-preload-666352/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:21:44.777742 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/no-preload-666352/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004195278s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.32s)

TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-603834 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.18s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-603834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

TestNetworkPlugins/group/auto/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-603834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
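
The DNS/Localhost/HairPin triad probes the same netcat deployment three ways: in-cluster DNS resolution, loopback reachability on the pod's own port, and hairpin traffic, meaning the pod reaching itself through its own Service name. A Go sketch reproducing the three probes with kubectl exec; kubectlExec is a hypothetical helper, while the probe commands are copied from the log:

package main

import (
	"fmt"
	"os/exec"
)

// kubectlExec runs a shell command inside the netcat deployment's pod.
func kubectlExec(kctx, cmd string) error {
	return exec.Command("kubectl", "--context", kctx, "exec", "deployment/netcat",
		"--", "/bin/sh", "-c", cmd).Run()
}

func main() {
	kctx := "auto-603834"
	probes := []struct{ name, cmd string }{
		{"DNS", "nslookup kubernetes.default"},          // service discovery works
		{"Localhost", "nc -w 5 -i 5 -z localhost 8080"}, // pod reaches its own port via loopback
		{"HairPin", "nc -w 5 -i 5 -z netcat 8080"},      // pod reaches itself via its Service
	}
	for _, p := range probes {
		if err := kubectlExec(kctx, p.cmd); err != nil {
			fmt.Println(p.name, "FAIL:", err)
			continue
		}
		fmt.Println(p.name, "OK")
	}
}

The same triad repeats for each CNI profile below (kindnet, calico, custom-flannel, enable-default-cni, flannel); only the kubectl context changes.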

TestNetworkPlugins/group/kindnet/Start (56.22s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-603834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1216 12:22:20.622602 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/no-preload-666352/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:22:20.832275 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-603834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (56.224244591s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (56.22s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7kbjq" [e1bfcb32-68fb-4625-9637-a64fbc8e7a02] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00437977s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7kbjq" [e1bfcb32-68fb-4625-9637-a64fbc8e7a02] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003980676s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-682312 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.09s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-682312 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.26s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (3.59s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-682312 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-682312 -n default-k8s-diff-port-682312
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-682312 -n default-k8s-diff-port-682312: exit status 2 (331.275978ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-682312 -n default-k8s-diff-port-682312
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-682312 -n default-k8s-diff-port-682312: exit status 2 (358.703386ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-682312 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-682312 -n default-k8s-diff-port-682312
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-682312 -n default-k8s-diff-port-682312
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (3.59s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-qhhlc" [4c9f8db2-9170-4124-bebb-6db18172bcbb] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.005049111s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/Start (64.54s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-603834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-603834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m4.543804607s)
--- PASS: TestNetworkPlugins/group/calico/Start (64.54s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.53s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-603834 "pgrep -a kubelet"
I1216 12:23:09.763248 1137938 config.go:182] Loaded profile config "kindnet-603834": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.53s)

TestNetworkPlugins/group/kindnet/NetCatPod (11.5s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-603834 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-m8rcs" [65357871-9d66-43d3-b189-41fe324ed073] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-m8rcs" [65357871-9d66-43d3-b189-41fe324ed073] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 11.006151696s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (11.50s)

TestNetworkPlugins/group/kindnet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-603834 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

TestNetworkPlugins/group/kindnet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-603834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

TestNetworkPlugins/group/kindnet/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-603834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.20s)

TestNetworkPlugins/group/custom-flannel/Start (67.41s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-603834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-603834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m7.405225549s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (67.41s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-k48wq" [762584dd-a071-475a-a6a3-8e9785914a5c] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006034992s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-603834 "pgrep -a kubelet"
I1216 12:24:14.505019 1137938 config.go:182] Loaded profile config "calico-603834": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.38s)

TestNetworkPlugins/group/calico/NetCatPod (12.32s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-603834 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9fdt8" [0f9aa4a6-e086-4cc2-956a-bc8c2fca7d36] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9fdt8" [0f9aa4a6-e086-4cc2-956a-bc8c2fca7d36] Running
E1216 12:24:23.507291 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/no-preload-666352/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.005417101s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.32s)

TestNetworkPlugins/group/calico/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-603834 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.21s)

TestNetworkPlugins/group/calico/Localhost (0.17s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-603834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.17s)

TestNetworkPlugins/group/calico/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-603834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.18s)

TestNetworkPlugins/group/enable-default-cni/Start (49.77s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-603834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-603834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (49.771630577s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (49.77s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-603834 "pgrep -a kubelet"
I1216 12:24:55.302139 1137938 config.go:182] Loaded profile config "custom-flannel-603834": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.37s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.31s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-603834 replace --force -f testdata/netcat-deployment.yaml
I1216 12:24:55.600939 1137938 kapi.go:136] Waiting for deployment netcat to stabilize, generation 1 observed generation 1 spec.replicas 1 status.replicas 0
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-x594s" [2418efcd-43b4-4551-bc04-7723cf3fd4a8] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-x594s" [2418efcd-43b4-4551-bc04-7723cf3fd4a8] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004939119s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.31s)

TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-603834 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.23s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-603834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-603834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.20s)

TestNetworkPlugins/group/flannel/Start (58.15s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-603834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-603834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (58.147510891s)
--- PASS: TestNetworkPlugins/group/flannel/Start (58.15s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-603834 "pgrep -a kubelet"
I1216 12:25:40.673019 1137938 config.go:182] Loaded profile config "enable-default-cni-603834": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.36s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (165.34s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-603834 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-64jjm" [a0960482-e754-4bc8-a5fb-4b0c5966ecb4] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-64jjm" [a0960482-e754-4bc8-a5fb-4b0c5966ecb4] Running
E1216 12:28:21.649313 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/default-k8s-diff-port-682312/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:28:23.721509 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/kindnet-603834/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 2m45.006432867s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (165.34s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-tjqcp" [6c6a4837-01d1-47c9-99d7-5e8322c985da] Running
E1216 12:26:35.779792 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/auto-603834/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:26:35.786214 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/auto-603834/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:26:35.797744 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/auto-603834/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:26:35.819207 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/auto-603834/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:26:35.860582 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/auto-603834/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:26:35.942198 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/auto-603834/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:26:36.104320 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/auto-603834/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:26:36.425943 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/auto-603834/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.00425651s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-603834 "pgrep -a kubelet"
I1216 12:26:36.924497 1137938 config.go:182] Loaded profile config "flannel-603834": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.27s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-603834 replace --force -f testdata/netcat-deployment.yaml
E1216 12:26:37.067640 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/auto-603834/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2c2p8" [85e0bc9e-2289-408f-8b31-1e065d937fe1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1216 12:26:38.349530 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/auto-603834/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:26:39.645370 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/no-preload-666352/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:26:40.911501 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/auto-603834/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-2c2p8" [85e0bc9e-2289-408f-8b31-1e065d937fe1] Running
E1216 12:26:46.032898 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/auto-603834/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003871541s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.25s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.17s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-603834 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.14s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-603834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-603834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)
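The DNS, Localhost and HairPin checks above are each one kubectl exec into the netcat deployment: resolve the cluster DNS name, connect to the pod's own port directly, then connect back to it through its own Service (the hairpin path). A minimal sketch that replays all three probes in order, assuming kubectl on PATH and the flannel-603834 context from the log:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // probe runs a shell one-liner inside the netcat deployment,
    // exactly as the kubectl commands logged above do.
    func probe(kubeContext, shellCmd string) error {
    	out, err := exec.Command("kubectl", "--context", kubeContext,
    		"exec", "deployment/netcat", "--",
    		"/bin/sh", "-c", shellCmd).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%q failed: %v\n%s", shellCmd, err, out)
    	}
    	return nil
    }

    func main() {
    	checks := []string{
    		"nslookup kubernetes.default",    // DNS: resolve the in-cluster API service
    		"nc -w 5 -i 5 -z localhost 8080", // Localhost: the pod reaches itself directly
    		"nc -w 5 -i 5 -z netcat 8080",    // HairPin: the pod reaches itself via its own Service
    	}
    	for _, c := range checks {
    		if err := probe("flannel-603834", c); err != nil {
    			fmt.Println(err)
    			return
    		}
    	}
    	fmt.Println("all three connectivity probes passed")
    }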

                                                
                                    
TestNetworkPlugins/group/bridge/Start (80.44s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-603834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1216 12:27:16.756320 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/auto-603834/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:27:20.832111 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/addons-467441/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:27:40.672619 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/default-k8s-diff-port-682312/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:27:40.679272 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/default-k8s-diff-port-682312/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:27:40.691274 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/default-k8s-diff-port-682312/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:27:40.712686 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/default-k8s-diff-port-682312/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:27:40.754067 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/default-k8s-diff-port-682312/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:27:40.835887 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/default-k8s-diff-port-682312/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:27:40.997301 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/default-k8s-diff-port-682312/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:27:41.319472 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/default-k8s-diff-port-682312/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:27:41.961045 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/default-k8s-diff-port-682312/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:27:43.242674 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/default-k8s-diff-port-682312/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:27:45.804221 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/default-k8s-diff-port-682312/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:27:50.926136 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/default-k8s-diff-port-682312/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:27:57.717995 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/auto-603834/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:28:01.167491 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/default-k8s-diff-port-682312/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:28:03.227079 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/kindnet-603834/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:28:03.233576 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/kindnet-603834/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:28:03.245119 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/kindnet-603834/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:28:03.266604 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/kindnet-603834/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:28:03.308061 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/kindnet-603834/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:28:03.389604 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/kindnet-603834/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:28:03.551180 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/kindnet-603834/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:28:03.872705 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/kindnet-603834/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:28:04.514805 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/kindnet-603834/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:28:05.796329 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/kindnet-603834/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:28:08.357951 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/kindnet-603834/client.crt: no such file or directory" logger="UnhandledError"
E1216 12:28:13.480163 1137938 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20107-1132549/.minikube/profiles/kindnet-603834/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-603834 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (1m20.439391638s)
--- PASS: TestNetworkPlugins/group/bridge/Start (80.44s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-603834 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-603834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-603834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-603834 "pgrep -a kubelet"
I1216 12:28:30.731420 1137938 config.go:182] Loaded profile config "bridge-603834": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)
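KubeletFlags simply lists the running kubelet process inside the node to confirm which flags it was started with. The same probe can be run by hand; a sketch assuming a plain "minikube" binary on PATH rather than the report's out/minikube-linux-arm64:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    func main() {
    	// Same probe as net_test.go:133 above: print the running kubelet's
    	// full command line from inside the node, over minikube's ssh.
    	out, err := exec.Command("minikube", "ssh", "-p", "bridge-603834",
    		"pgrep -a kubelet").CombinedOutput()
    	if err != nil {
    		fmt.Printf("kubelet probe failed: %v\n%s", err, out)
    		return
    	}
    	fmt.Printf("kubelet command line: %s", out)
    }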

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (12.29s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-603834 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hk9jb" [dfa927d6-530f-40a6-afbf-0bb8ea2b8630] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hk9jb" [dfa927d6-530f-40a6-afbf-0bb8ea2b8630] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.008428908s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.29s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.28s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-603834 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.28s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-603834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.23s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-603834 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.20s)

                                                
                                    

Test skip (31/330)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists; binaries are present within it.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists; binaries are present within it.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.31.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0.58s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-168069 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-168069" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-168069
--- SKIP: TestDownloadOnlyKic (0.58s)

                                                
                                    
TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

                                                
                                    
TestAddons/serial/Volcano (0.32s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-467441 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.32s)

                                                
                                    
TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-090900" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-090900
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet (5.9s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as the crio container runtime requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-603834 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-603834

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-603834

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-603834

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-603834

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-603834

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-603834

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-603834

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-603834

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-603834

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-603834

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> host: /etc/hosts:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> host: /etc/resolv.conf:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods:
Error in configuration: context was not found for specified context: kubenet-603834

>>> host: crictl pods:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> host: crictl containers:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> k8s: describe netcat deployment:
error: context "kubenet-603834" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-603834" does not exist

>>> k8s: netcat logs:
error: context "kubenet-603834" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-603834" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-603834" does not exist

>>> k8s: coredns logs:
error: context "kubenet-603834" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-603834" does not exist

>>> k8s: api server logs:
error: context "kubenet-603834" does not exist

>>> host: /etc/cni:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> host: ip a s:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> host: ip r s:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> host: iptables-save:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> host: iptables table nat:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-603834" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-603834" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-603834" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> host: kubelet daemon config:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> k8s: kubelet logs:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-603834

>>> host: docker daemon status:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> host: docker daemon config:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> host: docker system info:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> host: cri-docker daemon status:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> host: cri-docker daemon config:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> host: cri-dockerd version:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> host: containerd daemon status:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> host: containerd daemon config:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> host: containerd config dump:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> host: crio daemon status:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> host: crio daemon config:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> host: /etc/crio:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

>>> host: crio config:
* Profile "kubenet-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-603834"

----------------------- debugLogs end: kubenet-603834 [took: 5.657537018s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-603834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-603834
--- SKIP: TestNetworkPlugins/group/kubenet (5.90s)

                                                
                                    
TestNetworkPlugins/group/cilium (5.2s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-603834 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-603834

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-603834

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-603834

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-603834

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-603834

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-603834

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-603834

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-603834

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-603834

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-603834

>>> host: /etc/nsswitch.conf:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> host: /etc/hosts:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> host: /etc/resolv.conf:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-603834

>>> host: crictl pods:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> host: crictl containers:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> k8s: describe netcat deployment:
error: context "cilium-603834" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-603834" does not exist

>>> k8s: netcat logs:
error: context "cilium-603834" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-603834" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-603834" does not exist

>>> k8s: coredns logs:
error: context "cilium-603834" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-603834" does not exist

>>> k8s: api server logs:
error: context "cilium-603834" does not exist

>>> host: /etc/cni:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> host: ip a s:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> host: ip r s:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> host: iptables-save:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> host: iptables table nat:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-603834

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-603834

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-603834" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-603834" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-603834

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-603834

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-603834" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-603834" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-603834" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-603834" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-603834" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> host: kubelet daemon config:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> k8s: kubelet logs:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

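(The empty kubeconfig above, with clusters: null and contexts: null, is the root cause of every "context was not found" and "does not exist" error in this section: debugLogs invokes kubectl with --context cilium-603834, but the skipped test never started that cluster, so the context was never written. Running "kubectl config get-contexts" against this config would, presumably, list no entries at all.)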
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-603834

>>> host: docker daemon status:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> host: docker daemon config:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> host: docker system info:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> host: cri-docker daemon status:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> host: cri-docker daemon config:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> host: cri-dockerd version:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> host: containerd daemon status:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> host: containerd daemon config:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> host: containerd config dump:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> host: crio daemon status:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> host: crio daemon config:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> host: /etc/crio:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

>>> host: crio config:
* Profile "cilium-603834" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-603834"

----------------------- debugLogs end: cilium-603834 [took: 5.011234527s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-603834" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-603834
--- SKIP: TestNetworkPlugins/group/cilium (5.20s)
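(To exercise the plugin by hand despite the skip, the hint printed throughout the log can be extended with minikube's --cni flag; a hedged sketch, assuming the documented flag values: run "minikube start -p cilium-603834 --cni=cilium", then "kubectl --context cilium-603834 get pods -n kube-system" to watch the Cilium pods come up.)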
