Test Report: Docker_Linux_crio_arm64 20090

20ecd3658b86897ae797acf630cebadf77816c63:2024-12-13:37470

Failed tests (2/330)

| Order | Failed test                       | Duration (s) |
|-------|-----------------------------------|--------------|
| 36    | TestAddons/parallel/Ingress       | 152.84       |
| 38    | TestAddons/parallel/MetricsServer | 305.61       |
TestAddons/parallel/Ingress (152.84s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-248098 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-248098 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-248098 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [daa948b1-3e15-4f13-8b0d-ec8e9c2f7546] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [daa948b1-3e15-4f13-8b0d-ec8e9c2f7546] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004987725s
I1213 19:23:39.397954  602199 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-linux-arm64 -p addons-248098 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-arm64 -p addons-248098 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m10.235320918s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
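Note: curl exits with code 28 when an operation times out, and minikube ssh propagates the remote command's exit status (hence "ssh: Process exited with status 28" above), so the probe reached the node but never got an HTTP response from the ingress controller. A minimal sketch for re-running the same probe by hand, assuming the profile name and binary path shown in this log; the per-attempt timeout and retry loop are illustrative, not the harness's own logic:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// The probe the test runs: curl inside the node against the ingress
	// controller, with an explicit per-attempt timeout so a hung ingress
	// surfaces as curl exit code 28 rather than an indefinite wait.
	args := []string{"-p", "addons-248098", "ssh",
		"curl -s --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command("out/minikube-linux-arm64", args...).CombinedOutput()
		if err == nil {
			fmt.Printf("ingress responded:\n%s", out)
			return
		}
		fmt.Printf("probe failed (%v); retrying\n", err)
		time.Sleep(5 * time.Second)
	}
	fmt.Println("no response before deadline (curl exit 28 = operation timed out)")
}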
addons_test.go:286: (dbg) Run:  kubectl --context addons-248098 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-arm64 -p addons-248098 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.49.2
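The ingress-dns follow-up points nslookup at the addon on the cluster IP reported by minikube ip (192.168.49.2). The same query can be expressed with Go's net.Resolver so it bypasses the system resolver entirely; a sketch assuming the IP and hostname from the log, with an illustrative timeout:

package main

import (
	"context"
	"fmt"
	"net"
	"time"
)

func main() {
	// Send the DNS query straight to the ingress-dns addon instead of
	// whatever resolver /etc/resolv.conf names.
	r := &net.Resolver{
		PreferGo: true,
		Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
			d := net.Dialer{Timeout: 5 * time.Second}
			return d.DialContext(ctx, network, "192.168.49.2:53")
		},
	}
	addrs, err := r.LookupHost(context.Background(), "hello-john.test")
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("hello-john.test ->", addrs)
}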
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/Ingress]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-248098
helpers_test.go:235: (dbg) docker inspect addons-248098:

-- stdout --
	[
	    {
	        "Id": "71118ff07ec6fa79104cf400f95c50c9ae227a1aad64456bb5c81d1d75958776",
	        "Created": "2024-12-13T19:18:52.315159725Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 603478,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-13T19:18:52.484535042Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7cd263f59e19eeefdb79b99186c433854c2243e3d7fa2988b2d817cac7fc54f8",
	        "ResolvConfPath": "/var/lib/docker/containers/71118ff07ec6fa79104cf400f95c50c9ae227a1aad64456bb5c81d1d75958776/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/71118ff07ec6fa79104cf400f95c50c9ae227a1aad64456bb5c81d1d75958776/hostname",
	        "HostsPath": "/var/lib/docker/containers/71118ff07ec6fa79104cf400f95c50c9ae227a1aad64456bb5c81d1d75958776/hosts",
	        "LogPath": "/var/lib/docker/containers/71118ff07ec6fa79104cf400f95c50c9ae227a1aad64456bb5c81d1d75958776/71118ff07ec6fa79104cf400f95c50c9ae227a1aad64456bb5c81d1d75958776-json.log",
	        "Name": "/addons-248098",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-248098:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-248098",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1eb625f67d65df01d611761ab1363a88e29135b4887423b64c40650d552835e4-init/diff:/var/lib/docker/overlay2/7f60ef155cdf2fdd139012aca07bc58fe52fb18f995aec2de9b3156cc93a5c4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1eb625f67d65df01d611761ab1363a88e29135b4887423b64c40650d552835e4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1eb625f67d65df01d611761ab1363a88e29135b4887423b64c40650d552835e4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1eb625f67d65df01d611761ab1363a88e29135b4887423b64c40650d552835e4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-248098",
	                "Source": "/var/lib/docker/volumes/addons-248098/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-248098",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-248098",
	                "name.minikube.sigs.k8s.io": "addons-248098",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "70c555dc0bf616658c39517ca754bbc8d0217eecb668e8d418b78ab6f8b69a36",
	            "SandboxKey": "/var/run/docker/netns/70c555dc0bf6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33513"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33516"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33514"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33515"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-248098": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "26d751c067fc6e1d561dda56dbfe217bd324778a2878c8a088bc311c8b3eb10d",
	                    "EndpointID": "01de513d376697fd43bead3e31bc9770fd3b8196e20a57de45d46140386899ce",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-248098",
	                        "71118ff07ec6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
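The inspect output shows each exposed container port published on 127.0.0.1 with a dynamically assigned host port (22/tcp lands on 33512, which the SSH provisioning steps later in this log dial). A sketch of recovering that mapping by decoding the docker inspect JSON, assuming the container name from this report and trimming the struct to the fields this lookup needs:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// container models only the slice of docker inspect output used below.
type container struct {
	NetworkSettings struct {
		Ports map[string][]struct {
			HostIp   string
			HostPort string
		}
	}
}

func main() {
	out, err := exec.Command("docker", "inspect", "addons-248098").Output()
	if err != nil {
		panic(err)
	}
	var cs []container
	if err := json.Unmarshal(out, &cs); err != nil {
		panic(err)
	}
	// Expect 127.0.0.1:33512 per the NetworkSettings.Ports block above.
	ssh := cs[0].NetworkSettings.Ports["22/tcp"][0]
	fmt.Printf("ssh published on %s:%s\n", ssh.HostIp, ssh.HostPort)
}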
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-248098 -n addons-248098
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-248098 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-248098 logs -n 25: (1.600483378s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-307056                                                                     | download-only-307056   | jenkins | v1.34.0 | 13 Dec 24 19:18 UTC | 13 Dec 24 19:18 UTC |
	| delete  | -p download-only-161886                                                                     | download-only-161886   | jenkins | v1.34.0 | 13 Dec 24 19:18 UTC | 13 Dec 24 19:18 UTC |
	| start   | --download-only -p                                                                          | download-docker-972085 | jenkins | v1.34.0 | 13 Dec 24 19:18 UTC |                     |
	|         | download-docker-972085                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-972085                                                                   | download-docker-972085 | jenkins | v1.34.0 | 13 Dec 24 19:18 UTC | 13 Dec 24 19:18 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-356185   | jenkins | v1.34.0 | 13 Dec 24 19:18 UTC |                     |
	|         | binary-mirror-356185                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34457                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-356185                                                                     | binary-mirror-356185   | jenkins | v1.34.0 | 13 Dec 24 19:18 UTC | 13 Dec 24 19:18 UTC |
	| addons  | disable dashboard -p                                                                        | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:18 UTC |                     |
	|         | addons-248098                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:18 UTC |                     |
	|         | addons-248098                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-248098 --wait=true                                                                | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:18 UTC | 13 Dec 24 19:21 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-248098 addons disable                                                                | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:21 UTC | 13 Dec 24 19:21 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-248098 addons disable                                                                | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:21 UTC | 13 Dec 24 19:21 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:21 UTC | 13 Dec 24 19:21 UTC |
	|         | -p addons-248098                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-248098 addons disable                                                                | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:21 UTC | 13 Dec 24 19:22 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-248098 ip                                                                            | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:22 UTC | 13 Dec 24 19:22 UTC |
	| addons  | addons-248098 addons disable                                                                | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:22 UTC | 13 Dec 24 19:22 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-248098 addons disable                                                                | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:22 UTC | 13 Dec 24 19:22 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-248098 addons                                                                        | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:22 UTC | 13 Dec 24 19:22 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-248098 ssh cat                                                                       | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:22 UTC | 13 Dec 24 19:22 UTC |
	|         | /opt/local-path-provisioner/pvc-3a3ae2c7-94c0-4b5c-a99c-675901123adf_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-248098 addons                                                                        | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:22 UTC | 13 Dec 24 19:22 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-248098 addons disable                                                                | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:22 UTC | 13 Dec 24 19:22 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-248098 addons                                                                        | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:23 UTC | 13 Dec 24 19:23 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-248098 addons                                                                        | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:23 UTC | 13 Dec 24 19:23 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-248098 addons                                                                        | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:23 UTC | 13 Dec 24 19:23 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-248098 ssh curl -s                                                                   | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:23 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-248098 ip                                                                            | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:25 UTC | 13 Dec 24 19:25 UTC |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/13 19:18:27
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 19:18:27.055781  602969 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:18:27.056002  602969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:18:27.056033  602969 out.go:358] Setting ErrFile to fd 2...
	I1213 19:18:27.056058  602969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:18:27.056425  602969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-596807/.minikube/bin
	I1213 19:18:27.057083  602969 out.go:352] Setting JSON to false
	I1213 19:18:27.058049  602969 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10823,"bootTime":1734106684,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 19:18:27.058208  602969 start.go:139] virtualization:  
	I1213 19:18:27.061235  602969 out.go:177] * [addons-248098] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1213 19:18:27.064472  602969 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 19:18:27.064499  602969 notify.go:220] Checking for updates...
	I1213 19:18:27.068685  602969 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:18:27.070671  602969 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20090-596807/kubeconfig
	I1213 19:18:27.073273  602969 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-596807/.minikube
	I1213 19:18:27.075308  602969 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 19:18:27.077562  602969 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 19:18:27.080283  602969 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 19:18:27.115987  602969 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1213 19:18:27.116107  602969 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:18:27.170408  602969 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-12-13 19:18:27.161534867 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1213 19:18:27.170522  602969 docker.go:318] overlay module found
	I1213 19:18:27.172847  602969 out.go:177] * Using the docker driver based on user configuration
	I1213 19:18:27.175263  602969 start.go:297] selected driver: docker
	I1213 19:18:27.175290  602969 start.go:901] validating driver "docker" against <nil>
	I1213 19:18:27.175322  602969 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 19:18:27.176042  602969 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:18:27.234146  602969 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-12-13 19:18:27.225155419 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1213 19:18:27.234392  602969 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1213 19:18:27.234624  602969 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 19:18:27.236939  602969 out.go:177] * Using Docker driver with root privileges
	I1213 19:18:27.239129  602969 cni.go:84] Creating CNI manager for ""
	I1213 19:18:27.239209  602969 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 19:18:27.239231  602969 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 19:18:27.239319  602969 start.go:340] cluster config:
	{Name:addons-248098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-248098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:18:27.243510  602969 out.go:177] * Starting "addons-248098" primary control-plane node in "addons-248098" cluster
	I1213 19:18:27.245521  602969 cache.go:121] Beginning downloading kic base image for docker with crio
	I1213 19:18:27.247755  602969 out.go:177] * Pulling base image v0.0.45-1734029593-20090 ...
	I1213 19:18:27.249819  602969 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 19:18:27.249905  602969 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20090-596807/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4
	I1213 19:18:27.249904  602969 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local docker daemon
	I1213 19:18:27.249919  602969 cache.go:56] Caching tarball of preloaded images
	I1213 19:18:27.250084  602969 preload.go:172] Found /home/jenkins/minikube-integration/20090-596807/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 19:18:27.250176  602969 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1213 19:18:27.250719  602969 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/config.json ...
	I1213 19:18:27.250768  602969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/config.json: {Name:mk4985bbfdf21426c540bab4f5039b3f705d29dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:18:27.266401  602969 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 to local cache
	I1213 19:18:27.266532  602969 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local cache directory
	I1213 19:18:27.266554  602969 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local cache directory, skipping pull
	I1213 19:18:27.266559  602969 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 exists in cache, skipping pull
	I1213 19:18:27.266567  602969 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 as a tarball
	I1213 19:18:27.266572  602969 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 from local cache
	I1213 19:18:45.145346  602969 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 from cached tarball
	I1213 19:18:45.145398  602969 cache.go:194] Successfully downloaded all kic artifacts
	I1213 19:18:45.145432  602969 start.go:360] acquireMachinesLock for addons-248098: {Name:mk90cd79b2d7e9671af7af8749755f35a5159dc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 19:18:45.147734  602969 start.go:364] duration metric: took 2.261167ms to acquireMachinesLock for "addons-248098"
	I1213 19:18:45.147808  602969 start.go:93] Provisioning new machine with config: &{Name:addons-248098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-248098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 19:18:45.147954  602969 start.go:125] createHost starting for "" (driver="docker")
	I1213 19:18:45.160055  602969 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1213 19:18:45.160417  602969 start.go:159] libmachine.API.Create for "addons-248098" (driver="docker")
	I1213 19:18:45.160457  602969 client.go:168] LocalClient.Create starting
	I1213 19:18:45.160603  602969 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20090-596807/.minikube/certs/ca.pem
	I1213 19:18:45.524939  602969 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20090-596807/.minikube/certs/cert.pem
	I1213 19:18:45.865688  602969 cli_runner.go:164] Run: docker network inspect addons-248098 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 19:18:45.887665  602969 cli_runner.go:211] docker network inspect addons-248098 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 19:18:45.887747  602969 network_create.go:284] running [docker network inspect addons-248098] to gather additional debugging logs...
	I1213 19:18:45.887768  602969 cli_runner.go:164] Run: docker network inspect addons-248098
	W1213 19:18:45.903678  602969 cli_runner.go:211] docker network inspect addons-248098 returned with exit code 1
	I1213 19:18:45.903718  602969 network_create.go:287] error running [docker network inspect addons-248098]: docker network inspect addons-248098: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-248098 not found
	I1213 19:18:45.903730  602969 network_create.go:289] output of [docker network inspect addons-248098]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-248098 not found
	
	** /stderr **
	I1213 19:18:45.903836  602969 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 19:18:45.920427  602969 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ced260}
	I1213 19:18:45.920475  602969 network_create.go:124] attempt to create docker network addons-248098 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1213 19:18:45.920541  602969 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-248098 addons-248098
	I1213 19:18:45.995385  602969 network_create.go:108] docker network addons-248098 192.168.49.0/24 created
	I1213 19:18:45.995429  602969 kic.go:121] calculated static IP "192.168.49.2" for the "addons-248098" container
	I1213 19:18:45.995509  602969 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 19:18:46.022595  602969 cli_runner.go:164] Run: docker volume create addons-248098 --label name.minikube.sigs.k8s.io=addons-248098 --label created_by.minikube.sigs.k8s.io=true
	I1213 19:18:46.040850  602969 oci.go:103] Successfully created a docker volume addons-248098
	I1213 19:18:46.040947  602969 cli_runner.go:164] Run: docker run --rm --name addons-248098-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-248098 --entrypoint /usr/bin/test -v addons-248098:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 -d /var/lib
	I1213 19:18:48.146229  602969 cli_runner.go:217] Completed: docker run --rm --name addons-248098-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-248098 --entrypoint /usr/bin/test -v addons-248098:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 -d /var/lib: (2.10523956s)
	I1213 19:18:48.146288  602969 oci.go:107] Successfully prepared a docker volume addons-248098
	I1213 19:18:48.146330  602969 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 19:18:48.146358  602969 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 19:18:48.146436  602969 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20090-596807/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-248098:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 19:18:52.242035  602969 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20090-596807/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-248098:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 -I lz4 -xf /preloaded.tar -C /extractDir: (4.095557848s)
	I1213 19:18:52.242067  602969 kic.go:203] duration metric: took 4.095714766s to extract preloaded images to volume ...
	W1213 19:18:52.242215  602969 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 19:18:52.242404  602969 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 19:18:52.300006  602969 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-248098 --name addons-248098 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-248098 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-248098 --network addons-248098 --ip 192.168.49.2 --volume addons-248098:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9
	I1213 19:18:52.684589  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Running}}
	I1213 19:18:52.708745  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:18:52.731796  602969 cli_runner.go:164] Run: docker exec addons-248098 stat /var/lib/dpkg/alternatives/iptables
	I1213 19:18:52.783069  602969 oci.go:144] the created container "addons-248098" has a running status.
	I1213 19:18:52.783097  602969 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa...
	I1213 19:18:53.755040  602969 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 19:18:53.776445  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:18:53.796248  602969 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 19:18:53.796269  602969 kic_runner.go:114] Args: [docker exec --privileged addons-248098 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 19:18:53.850724  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:18:53.868534  602969 machine.go:93] provisionDockerMachine start ...
	I1213 19:18:53.868627  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:18:53.888235  602969 main.go:141] libmachine: Using SSH client type: native
	I1213 19:18:53.888513  602969 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 33512 <nil> <nil>}
	I1213 19:18:53.888529  602969 main.go:141] libmachine: About to run SSH command:
	hostname
	I1213 19:18:54.034254  602969 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-248098
	
	I1213 19:18:54.034303  602969 ubuntu.go:169] provisioning hostname "addons-248098"
	I1213 19:18:54.034373  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:18:54.054913  602969 main.go:141] libmachine: Using SSH client type: native
	I1213 19:18:54.055181  602969 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 33512 <nil> <nil>}
	I1213 19:18:54.055206  602969 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-248098 && echo "addons-248098" | sudo tee /etc/hostname
	I1213 19:18:54.214170  602969 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-248098
	
	I1213 19:18:54.214261  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:18:54.231476  602969 main.go:141] libmachine: Using SSH client type: native
	I1213 19:18:54.231736  602969 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 33512 <nil> <nil>}
	I1213 19:18:54.231760  602969 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-248098' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-248098/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-248098' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 19:18:54.378633  602969 main.go:141] libmachine: SSH cmd err, output: <nil>: 
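
Editor's note: hostname provisioning above is three SSH round-trips: read the current hostname, set it with hostname/tee, then pin it in /etc/hosts so reverse lookups resolve. A sketch of that last edit as the shell template it effectively is; hostsFixup is hypothetical, since minikube's ubuntu provisioner inlines this script.

    package main

    import "fmt"

    // hostsFixup reproduces the shell run over SSH above: ensure /etc/hosts
    // maps 127.0.1.1 to the machine name, rewriting an existing entry in
    // place or appending one if none exists.
    func hostsFixup(name string) string {
        return fmt.Sprintf(
            "if ! grep -xq '.*\\s%[1]s' /etc/hosts; then "+
                "if grep -xq '127.0.1.1\\s.*' /etc/hosts; then "+
                "sudo sed -i 's/^127.0.1.1\\s.*/127.0.1.1 %[1]s/g' /etc/hosts; "+
                "else echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts; fi; fi",
            name)
    }

    func main() { fmt.Println(hostsFixup("addons-248098")) }
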
	I1213 19:18:54.378661  602969 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20090-596807/.minikube CaCertPath:/home/jenkins/minikube-integration/20090-596807/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20090-596807/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20090-596807/.minikube}
	I1213 19:18:54.378688  602969 ubuntu.go:177] setting up certificates
	I1213 19:18:54.378698  602969 provision.go:84] configureAuth start
	I1213 19:18:54.378769  602969 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-248098
	I1213 19:18:54.395597  602969 provision.go:143] copyHostCerts
	I1213 19:18:54.395681  602969 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-596807/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20090-596807/.minikube/ca.pem (1082 bytes)
	I1213 19:18:54.395809  602969 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-596807/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20090-596807/.minikube/cert.pem (1123 bytes)
	I1213 19:18:54.395898  602969 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-596807/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20090-596807/.minikube/key.pem (1679 bytes)
	I1213 19:18:54.395967  602969 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20090-596807/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20090-596807/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20090-596807/.minikube/certs/ca-key.pem org=jenkins.addons-248098 san=[127.0.0.1 192.168.49.2 addons-248098 localhost minikube]
	I1213 19:18:54.809899  602969 provision.go:177] copyRemoteCerts
	I1213 19:18:54.809970  602969 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 19:18:54.810013  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:18:54.827762  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:18:54.931460  602969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-596807/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 19:18:54.956121  602969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-596807/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 19:18:54.980178  602969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-596807/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 19:18:55.013876  602969 provision.go:87] duration metric: took 635.158103ms to configureAuth
	I1213 19:18:55.013918  602969 ubuntu.go:193] setting minikube options for container-runtime
	I1213 19:18:55.014153  602969 config.go:182] Loaded profile config "addons-248098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:18:55.014302  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:18:55.071101  602969 main.go:141] libmachine: Using SSH client type: native
	I1213 19:18:55.071374  602969 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 33512 <nil> <nil>}
	I1213 19:18:55.071398  602969 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 19:18:55.329830  602969 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 19:18:55.329852  602969 machine.go:96] duration metric: took 1.461297288s to provisionDockerMachine
	I1213 19:18:55.329863  602969 client.go:171] duration metric: took 10.169398436s to LocalClient.Create
	I1213 19:18:55.329883  602969 start.go:167] duration metric: took 10.169469633s to libmachine.API.Create "addons-248098"
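
Editor's note: a few lines up, the crio.minikube drop-in marks the service CIDR (10.96.0.0/12) as an insecure registry and restarts CRI-O. A hedged sketch of building that same command string; crioOptsCmd is a made-up name, not a minikube function.

    package main

    import "fmt"

    // crioOptsCmd mirrors the SSH command above: write CRIO_MINIKUBE_OPTIONS
    // into /etc/sysconfig and restart CRI-O so images from the service CIDR
    // are pulled without TLS verification.
    func crioOptsCmd(serviceCIDR string) string {
        return "sudo mkdir -p /etc/sysconfig && printf %s \"\n" +
            "CRIO_MINIKUBE_OPTIONS='--insecure-registry " + serviceCIDR + " '\n" +
            "\" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio"
    }

    func main() { fmt.Println(crioOptsCmd("10.96.0.0/12")) }
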
	I1213 19:18:55.329891  602969 start.go:293] postStartSetup for "addons-248098" (driver="docker")
	I1213 19:18:55.329901  602969 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 19:18:55.329970  602969 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 19:18:55.330017  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:18:55.347094  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:18:55.447547  602969 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 19:18:55.450755  602969 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 19:18:55.450790  602969 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1213 19:18:55.450804  602969 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1213 19:18:55.450812  602969 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1213 19:18:55.450823  602969 filesync.go:126] Scanning /home/jenkins/minikube-integration/20090-596807/.minikube/addons for local assets ...
	I1213 19:18:55.450895  602969 filesync.go:126] Scanning /home/jenkins/minikube-integration/20090-596807/.minikube/files for local assets ...
	I1213 19:18:55.450920  602969 start.go:296] duration metric: took 121.02358ms for postStartSetup
	I1213 19:18:55.451245  602969 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-248098
	I1213 19:18:55.467710  602969 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/config.json ...
	I1213 19:18:55.468009  602969 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:18:55.468062  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:18:55.485705  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:18:55.583312  602969 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 19:18:55.588009  602969 start.go:128] duration metric: took 10.4400343s to createHost
	I1213 19:18:55.588034  602969 start.go:83] releasing machines lock for "addons-248098", held for 10.440259337s
	I1213 19:18:55.588121  602969 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-248098
	I1213 19:18:55.604873  602969 ssh_runner.go:195] Run: cat /version.json
	I1213 19:18:55.604925  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:18:55.605175  602969 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 19:18:55.605236  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:18:55.625747  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:18:55.641932  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:18:55.865572  602969 ssh_runner.go:195] Run: systemctl --version
	I1213 19:18:55.869608  602969 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 19:18:56.023964  602969 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 19:18:56.028708  602969 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 19:18:56.050673  602969 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1213 19:18:56.050757  602969 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 19:18:56.083729  602969 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
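
Editor's note: disabling the preinstalled loopback/bridge/podman CNI configs keeps them from racing the kindnet config applied later. A local-filesystem sketch of the rename-with-suffix step; disableCNIConfs is hypothetical, as the log does this over SSH with find -exec mv.

    package main

    import (
        "os"
        "path/filepath"
        "strings"
    )

    // disableCNIConfs side-lines bridge/podman CNI configs by renaming them
    // with a .mk_disabled suffix, matching the two find commands above.
    func disableCNIConfs(dir string) error {
        entries, err := os.ReadDir(dir)
        if err != nil {
            return err
        }
        for _, e := range entries {
            name := e.Name()
            if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
                continue
            }
            if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
                p := filepath.Join(dir, name)
                if err := os.Rename(p, p+".mk_disabled"); err != nil {
                    return err
                }
            }
        }
        return nil
    }

    func main() { _ = disableCNIConfs("/etc/cni/net.d") }
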
	I1213 19:18:56.083750  602969 start.go:495] detecting cgroup driver to use...
	I1213 19:18:56.083785  602969 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 19:18:56.083835  602969 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 19:18:56.099746  602969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 19:18:56.111443  602969 docker.go:217] disabling cri-docker service (if available) ...
	I1213 19:18:56.111553  602969 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 19:18:56.125763  602969 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 19:18:56.140789  602969 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 19:18:56.237908  602969 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 19:18:56.325624  602969 docker.go:233] disabling docker service ...
	I1213 19:18:56.325743  602969 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 19:18:56.346209  602969 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 19:18:56.359581  602969 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 19:18:56.451957  602969 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 19:18:56.550085  602969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 19:18:56.563345  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 19:18:56.581145  602969 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1213 19:18:56.581234  602969 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:18:56.592261  602969 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 19:18:56.592350  602969 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:18:56.603099  602969 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:18:56.613956  602969 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:18:56.624912  602969 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 19:18:56.634235  602969 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:18:56.644471  602969 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:18:56.660595  602969 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
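
Editor's note: the run of sed commands above rewrites /etc/crio/crio.conf.d/02-crio.conf in place: pause image, cgroup manager, conmon_cgroup, and the unprivileged-port sysctl. A sketch of the first two edits done with Go regexps instead of sed; patchCrioConf is a hypothetical stand-in.

    package main

    import (
        "os"
        "regexp"
    )

    // patchCrioConf pins the pause image and cgroup manager in 02-crio.conf,
    // approximating the first two sed edits above.
    func patchCrioConf(path, pauseImage, cgroupMgr string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "`+pauseImage+`"`))
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "`+cgroupMgr+`"`))
        return os.WriteFile(path, data, 0o644)
    }

    func main() {
        _ = patchCrioConf("/etc/crio/crio.conf.d/02-crio.conf",
            "registry.k8s.io/pause:3.10", "cgroupfs")
    }
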
	I1213 19:18:56.670646  602969 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 19:18:56.679462  602969 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 19:18:56.688378  602969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:18:56.766966  602969 ssh_runner.go:195] Run: sudo systemctl restart crio
	I1213 19:18:56.886569  602969 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 19:18:56.886719  602969 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 19:18:56.890716  602969 start.go:563] Will wait 60s for crictl version
	I1213 19:18:56.890833  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:18:56.894217  602969 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 19:18:56.932502  602969 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1213 19:18:56.932611  602969 ssh_runner.go:195] Run: crio --version
	I1213 19:18:56.971355  602969 ssh_runner.go:195] Run: crio --version
	I1213 19:18:57.021813  602969 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1213 19:18:57.024214  602969 cli_runner.go:164] Run: docker network inspect addons-248098 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 19:18:57.042490  602969 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 19:18:57.046587  602969 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
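
Editor's note: the host.minikube.internal entry is refreshed with a filter-and-rewrite rather than sed: drop any stale line, append the fresh mapping, and cp the temp file over /etc/hosts. Using cp rather than mv matters in a container, where /etc/hosts is bind-mounted and must be rewritten in place. Sketch below; hostsEntryCmd is hypothetical.

    package main

    import "fmt"

    // hostsEntryCmd builds the command seen above: strip any old line for the
    // host, append "ip<TAB>host", write to a temp file, then copy it back.
    func hostsEntryCmd(ip, host string) string {
        return fmt.Sprintf(
            "{ grep -v $'\\t%[2]s$' \"/etc/hosts\"; echo \"%[1]s\t%[2]s\"; } "+
                "> /tmp/h.$$; sudo cp /tmp/h.$$ \"/etc/hosts\"",
            ip, host)
    }

    func main() { fmt.Println(hostsEntryCmd("192.168.49.1", "host.minikube.internal")) }
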
	I1213 19:18:57.059450  602969 kubeadm.go:883] updating cluster {Name:addons-248098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-248098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 19:18:57.059574  602969 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 19:18:57.059643  602969 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:18:57.137675  602969 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 19:18:57.137698  602969 crio.go:433] Images already preloaded, skipping extraction
	I1213 19:18:57.137761  602969 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:18:57.173787  602969 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 19:18:57.173812  602969 cache_images.go:84] Images are preloaded, skipping loading
	I1213 19:18:57.173820  602969 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1213 19:18:57.173921  602969 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-248098 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-248098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
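
Editor's note: in the kubelet drop-in above, the bare ExecStart= line is systemd's convention for clearing the packaged unit's command before substituting minikube's own. A sketch of rendering such a drop-in with text/template; the field names and the abridged flag list are assumptions, not minikube's template.

    package main

    import (
        "fmt"
        "os"
        "text/template"
    )

    var unit = "[Unit]\nWants=crio.service\n\n[Service]\nExecStart=\n" +
        "ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet " +
        "--hostname-override={{.Node}} --node-ip={{.IP}} " +
        "--kubeconfig=/etc/kubernetes/kubelet.conf " +
        "--config=/var/lib/kubelet/config.yaml\n\n[Install]\n"

    var kubeletDropIn = template.Must(template.New("u").Parse(unit))

    func main() {
        err := kubeletDropIn.Execute(os.Stdout, struct{ Version, Node, IP string }{
            "v1.31.2", "addons-248098", "192.168.49.2",
        })
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }
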
	I1213 19:18:57.174003  602969 ssh_runner.go:195] Run: crio config
	I1213 19:18:57.222356  602969 cni.go:84] Creating CNI manager for ""
	I1213 19:18:57.222379  602969 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 19:18:57.222389  602969 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1213 19:18:57.222411  602969 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-248098 NodeName:addons-248098 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 19:18:57.222539  602969 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-248098"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 19:18:57.222611  602969 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1213 19:18:57.231575  602969 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 19:18:57.231687  602969 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 19:18:57.240565  602969 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1213 19:18:57.259184  602969 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 19:18:57.277514  602969 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
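
Editor's note: the rendered kubeadm config is staged as kubeadm.yaml.new and only promoted to kubeadm.yaml further down this log, once minikube decides initialization is actually needed. A hypothetical local version of that stage-then-promote pattern:

    package main

    import (
        "bytes"
        "os"
        "path/filepath"
    )

    // stageConfig writes the rendered config beside the live one and promotes
    // it only when the contents differ, mirroring the ".new then copy" flow.
    func stageConfig(dir string, rendered []byte) (changed bool, err error) {
        newPath := filepath.Join(dir, "kubeadm.yaml.new")
        curPath := filepath.Join(dir, "kubeadm.yaml")
        if err := os.WriteFile(newPath, rendered, 0o644); err != nil {
            return false, err
        }
        if cur, err := os.ReadFile(curPath); err == nil && bytes.Equal(cur, rendered) {
            return false, nil // already current; leave the live file alone
        }
        return true, os.Rename(newPath, curPath)
    }

    func main() { _, _ = stageConfig("/var/tmp/minikube", []byte("# rendered config\n")) }
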
	I1213 19:18:57.297096  602969 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 19:18:57.300762  602969 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:18:57.311998  602969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:18:57.393593  602969 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:18:57.407504  602969 certs.go:68] Setting up /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098 for IP: 192.168.49.2
	I1213 19:18:57.407530  602969 certs.go:194] generating shared ca certs ...
	I1213 19:18:57.407547  602969 certs.go:226] acquiring lock for ca certs: {Name:mk3cdd0ea94f7f906448b193b6df25da3e2261b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:18:57.407685  602969 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20090-596807/.minikube/ca.key
	I1213 19:18:57.753657  602969 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-596807/.minikube/ca.crt ...
	I1213 19:18:57.753689  602969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-596807/.minikube/ca.crt: {Name:mkd47ec227d5a0a992364ca75af37df461bf8251 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:18:57.754556  602969 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-596807/.minikube/ca.key ...
	I1213 19:18:57.754574  602969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-596807/.minikube/ca.key: {Name:mk99e7ab436fef1f7051dabcc331ea2d120ce21b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:18:57.754673  602969 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20090-596807/.minikube/proxy-client-ca.key
	I1213 19:18:57.965859  602969 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-596807/.minikube/proxy-client-ca.crt ...
	I1213 19:18:57.965891  602969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-596807/.minikube/proxy-client-ca.crt: {Name:mkd3882d2ccf5bff7977b8f91ec4b985ade96ca8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:18:57.966508  602969 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-596807/.minikube/proxy-client-ca.key ...
	I1213 19:18:57.966527  602969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-596807/.minikube/proxy-client-ca.key: {Name:mk9f1e77620da4f62399f28c89e1e49e6502ff2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:18:57.966625  602969 certs.go:256] generating profile certs ...
	I1213 19:18:57.966697  602969 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.key
	I1213 19:18:57.966723  602969 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt with IP's: []
	I1213 19:18:58.272499  602969 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt ...
	I1213 19:18:58.272535  602969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: {Name:mk65d52d2f3cffee39c58a204c5c86169e26beed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:18:58.273970  602969 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.key ...
	I1213 19:18:58.273989  602969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.key: {Name:mk7cf318e896508552eb82f0ebadb2445f7082e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:18:58.274084  602969 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/apiserver.key.2386a425
	I1213 19:18:58.274106  602969 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/apiserver.crt.2386a425 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1213 19:18:58.651536  602969 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/apiserver.crt.2386a425 ...
	I1213 19:18:58.651567  602969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/apiserver.crt.2386a425: {Name:mke53ea42652e58e64dcdd4b89ef7f4a4a14f85c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:18:58.652283  602969 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/apiserver.key.2386a425 ...
	I1213 19:18:58.652304  602969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/apiserver.key.2386a425: {Name:mke16db69a70a4e768d2fcef5a36f02309bb7b5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:18:58.652951  602969 certs.go:381] copying /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/apiserver.crt.2386a425 -> /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/apiserver.crt
	I1213 19:18:58.653040  602969 certs.go:385] copying /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/apiserver.key.2386a425 -> /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/apiserver.key
	I1213 19:18:58.653091  602969 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/proxy-client.key
	I1213 19:18:58.653112  602969 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/proxy-client.crt with IP's: []
	I1213 19:18:58.926757  602969 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/proxy-client.crt ...
	I1213 19:18:58.926786  602969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/proxy-client.crt: {Name:mkd3bdca2f1c30fa6d033d08e64b97c34b1ee90f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:18:58.927544  602969 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/proxy-client.key ...
	I1213 19:18:58.927566  602969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/proxy-client.key: {Name:mk13a5ea1b680a0acc1fb9a90733ee1b8d555e62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:18:58.927773  602969 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-596807/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 19:18:58.927819  602969 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-596807/.minikube/certs/ca.pem (1082 bytes)
	I1213 19:18:58.927848  602969 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-596807/.minikube/certs/cert.pem (1123 bytes)
	I1213 19:18:58.927877  602969 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-596807/.minikube/certs/key.pem (1679 bytes)
	I1213 19:18:58.928547  602969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-596807/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 19:18:58.976754  602969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-596807/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 19:18:59.020377  602969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-596807/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 19:18:59.047812  602969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-596807/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 19:18:59.073635  602969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 19:18:59.099274  602969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 19:18:59.124829  602969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 19:18:59.150551  602969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 19:18:59.175603  602969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-596807/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 19:18:59.200255  602969 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 19:18:59.218487  602969 ssh_runner.go:195] Run: openssl version
	I1213 19:18:59.224085  602969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 19:18:59.233761  602969 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:18:59.237304  602969 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 19:18 /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:18:59.237375  602969 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:18:59.244920  602969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
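
Editor's note: the b5213941.0 symlink name above is the OpenSSL subject hash of minikubeCA.pem; the hash-then-link dance is what lets system TLS stacks trust the cluster CA. A sketch that shells out to openssl the same way; linkCACert is a made-up helper.

    package main

    import (
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // linkCACert computes the cert's subject hash and installs the <hash>.0
    // symlink that OpenSSL-style trust directories expect.
    func linkCACert(pemPath, certsDir string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
        if err != nil {
            return err
        }
        link := filepath.Join(certsDir, strings.TrimSpace(string(out))+".0")
        _ = os.Remove(link) // emulate ln -fs: replace any stale link
        return os.Symlink(pemPath, link)
    }

    func main() {
        _ = linkCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
    }
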
	I1213 19:18:59.254323  602969 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 19:18:59.257610  602969 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 19:18:59.257675  602969 kubeadm.go:392] StartCluster: {Name:addons-248098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-248098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:18:59.257768  602969 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 19:18:59.257862  602969 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 19:18:59.301228  602969 cri.go:89] found id: ""
	I1213 19:18:59.301305  602969 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 19:18:59.310353  602969 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 19:18:59.319841  602969 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1213 19:18:59.319904  602969 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 19:18:59.328815  602969 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 19:18:59.328839  602969 kubeadm.go:157] found existing configuration files:
	
	I1213 19:18:59.328891  602969 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 19:18:59.338594  602969 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 19:18:59.338663  602969 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 19:18:59.347744  602969 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 19:18:59.356911  602969 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 19:18:59.356991  602969 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 19:18:59.365506  602969 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 19:18:59.374420  602969 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 19:18:59.374491  602969 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 19:18:59.383136  602969 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 19:18:59.392580  602969 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 19:18:59.392655  602969 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
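
Editor's note: with the stale configs cleared, the next step below is the kubeadm init invocation. A sketch of how such an argument list could be assembled, with the ignore-preflight list comma-joined; initArgs is hypothetical. SystemVerification is on the list because the docker driver skips it, as kubeadm.go:214 notes above.

    package main

    import (
        "fmt"
        "strings"
    )

    // initArgs assembles a kubeadm init argument list: one --config plus a
    // single comma-joined --ignore-preflight-errors flag.
    func initArgs(configPath string, ignores []string) []string {
        return []string{
            "init",
            "--config", configPath,
            "--ignore-preflight-errors=" + strings.Join(ignores, ","),
        }
    }

    func main() {
        fmt.Println(initArgs("/var/tmp/minikube/kubeadm.yaml",
            []string{"Port-10250", "Swap", "NumCPU", "Mem", "SystemVerification"}))
    }
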
	I1213 19:18:59.401105  602969 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 19:18:59.449629  602969 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1213 19:18:59.449991  602969 kubeadm.go:310] [preflight] Running pre-flight checks
	I1213 19:18:59.470150  602969 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1213 19:18:59.470313  602969 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1072-aws
	I1213 19:18:59.470373  602969 kubeadm.go:310] OS: Linux
	I1213 19:18:59.470453  602969 kubeadm.go:310] CGROUPS_CPU: enabled
	I1213 19:18:59.470524  602969 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1213 19:18:59.470597  602969 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1213 19:18:59.470667  602969 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1213 19:18:59.470745  602969 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1213 19:18:59.470813  602969 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1213 19:18:59.470888  602969 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1213 19:18:59.470957  602969 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1213 19:18:59.471051  602969 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1213 19:18:59.528975  602969 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 19:18:59.529091  602969 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 19:18:59.529189  602969 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 19:18:59.536172  602969 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 19:18:59.540072  602969 out.go:235]   - Generating certificates and keys ...
	I1213 19:18:59.540200  602969 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1213 19:18:59.540286  602969 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1213 19:19:00.246678  602969 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 19:19:00.785838  602969 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1213 19:19:01.636131  602969 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1213 19:19:02.024791  602969 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1213 19:19:02.790385  602969 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1213 19:19:02.790765  602969 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-248098 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 19:19:03.407514  602969 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1213 19:19:03.407674  602969 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-248098 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 19:19:04.222280  602969 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 19:19:04.641177  602969 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 19:19:05.202907  602969 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1213 19:19:05.203140  602969 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 19:19:06.009479  602969 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 19:19:06.181840  602969 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 19:19:07.103019  602969 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 19:19:07.437209  602969 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 19:19:08.145533  602969 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 19:19:08.146133  602969 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 19:19:08.151058  602969 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 19:19:08.153615  602969 out.go:235]   - Booting up control plane ...
	I1213 19:19:08.153730  602969 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 19:19:08.153813  602969 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 19:19:08.154897  602969 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 19:19:08.164962  602969 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 19:19:08.172281  602969 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 19:19:08.172339  602969 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1213 19:19:08.257882  602969 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 19:19:08.258008  602969 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 19:19:09.259532  602969 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001649368s
	I1213 19:19:09.259630  602969 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1213 19:19:15.761647  602969 kubeadm.go:310] [api-check] The API server is healthy after 6.502180938s
	I1213 19:19:15.781085  602969 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 19:19:15.796568  602969 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 19:19:15.828071  602969 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 19:19:15.828279  602969 kubeadm.go:310] [mark-control-plane] Marking the node addons-248098 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 19:19:15.843567  602969 kubeadm.go:310] [bootstrap-token] Using token: j5o3j6.zgtne4vwby5cxh24
	I1213 19:19:15.845663  602969 out.go:235]   - Configuring RBAC rules ...
	I1213 19:19:15.845800  602969 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 19:19:15.851702  602969 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 19:19:15.859463  602969 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 19:19:15.863643  602969 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 19:19:15.867638  602969 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 19:19:15.872771  602969 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 19:19:16.168591  602969 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 19:19:16.629547  602969 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1213 19:19:17.174865  602969 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1213 19:19:17.174896  602969 kubeadm.go:310] 
	I1213 19:19:17.174968  602969 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1213 19:19:17.174973  602969 kubeadm.go:310] 
	I1213 19:19:17.175100  602969 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1213 19:19:17.175112  602969 kubeadm.go:310] 
	I1213 19:19:17.175138  602969 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1213 19:19:17.175211  602969 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 19:19:17.175304  602969 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 19:19:17.175319  602969 kubeadm.go:310] 
	I1213 19:19:17.175382  602969 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1213 19:19:17.175388  602969 kubeadm.go:310] 
	I1213 19:19:17.175454  602969 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 19:19:17.175463  602969 kubeadm.go:310] 
	I1213 19:19:17.175520  602969 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1213 19:19:17.175628  602969 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 19:19:17.175704  602969 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 19:19:17.175709  602969 kubeadm.go:310] 
	I1213 19:19:17.175822  602969 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 19:19:17.175932  602969 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1213 19:19:17.175945  602969 kubeadm.go:310] 
	I1213 19:19:17.176058  602969 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token j5o3j6.zgtne4vwby5cxh24 \
	I1213 19:19:17.176186  602969 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3a4ff1c2a595792db2f2ca4f26d9011086ca3d6e4619c022e611d1580ec6ebd4 \
	I1213 19:19:17.176222  602969 kubeadm.go:310] 	--control-plane 
	I1213 19:19:17.176233  602969 kubeadm.go:310] 
	I1213 19:19:17.176328  602969 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1213 19:19:17.176337  602969 kubeadm.go:310] 
	I1213 19:19:17.176420  602969 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token j5o3j6.zgtne4vwby5cxh24 \
	I1213 19:19:17.176556  602969 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3a4ff1c2a595792db2f2ca4f26d9011086ca3d6e4619c022e611d1580ec6ebd4 
	I1213 19:19:17.176811  602969 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1072-aws\n", err: exit status 1
	I1213 19:19:17.176946  602969 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 19:19:17.176973  602969 cni.go:84] Creating CNI manager for ""
	I1213 19:19:17.176982  602969 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 19:19:17.180500  602969 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1213 19:19:17.182504  602969 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 19:19:17.186376  602969 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1213 19:19:17.186397  602969 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1213 19:19:17.205884  602969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1213 19:19:17.486806  602969 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 19:19:17.486941  602969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:19:17.487025  602969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-248098 minikube.k8s.io/updated_at=2024_12_13T19_19_17_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956 minikube.k8s.io/name=addons-248098 minikube.k8s.io/primary=true
	I1213 19:19:17.495888  602969 ops.go:34] apiserver oom_adj: -16
	I1213 19:19:17.641903  602969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:19:18.141946  602969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:19:18.642947  602969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:19:19.142076  602969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:19:19.642844  602969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:19:20.141993  602969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:19:20.642024  602969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:19:21.142539  602969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:19:21.228991  602969 kubeadm.go:1113] duration metric: took 3.742096798s to wait for elevateKubeSystemPrivileges
	I1213 19:19:21.229030  602969 kubeadm.go:394] duration metric: took 21.971375826s to StartCluster
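
Editor's note: the burst of "kubectl get sa default" calls above is a roughly half-second retry loop: the minikube-rbac clusterrolebinding created earlier only becomes useful once the token controller has created the default service account. A sketch of that polling; waitForDefaultSA is hypothetical.

    package main

    import (
        "fmt"
        "os/exec"
        "time"
    )

    // waitForDefaultSA polls for the default service account until it exists
    // or the deadline passes, mirroring the retry loop in the log above.
    func waitForDefaultSA(kubeconfig string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if exec.Command("kubectl", "get", "sa", "default",
                "--kubeconfig", kubeconfig).Run() == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("default serviceaccount not ready after %s", timeout)
    }

    func main() {
        _ = waitForDefaultSA("/var/lib/minikube/kubeconfig", time.Minute)
    }
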
	I1213 19:19:21.229051  602969 settings.go:142] acquiring lock: {Name:mka9b7535bd979f27733ffa8cb9f79579fa32ca5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:19:21.229190  602969 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20090-596807/kubeconfig
	I1213 19:19:21.229583  602969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-596807/kubeconfig: {Name:mka5435b4dfc150b8392bc985a52cf22d376e8bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:19:21.230376  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 19:19:21.230408  602969 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 19:19:21.230640  602969 config.go:182] Loaded profile config "addons-248098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:19:21.230676  602969 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1213 19:19:21.230746  602969 addons.go:69] Setting yakd=true in profile "addons-248098"
	I1213 19:19:21.230759  602969 addons.go:234] Setting addon yakd=true in "addons-248098"
	I1213 19:19:21.230782  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.231257  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.231623  602969 addons.go:69] Setting inspektor-gadget=true in profile "addons-248098"
	I1213 19:19:21.231651  602969 addons.go:234] Setting addon inspektor-gadget=true in "addons-248098"
	I1213 19:19:21.231687  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.232164  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.232320  602969 addons.go:69] Setting metrics-server=true in profile "addons-248098"
	I1213 19:19:21.232341  602969 addons.go:234] Setting addon metrics-server=true in "addons-248098"
	I1213 19:19:21.232366  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.232771  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.233276  602969 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-248098"
	I1213 19:19:21.233302  602969 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-248098"
	I1213 19:19:21.233330  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.233747  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.235082  602969 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-248098"
	I1213 19:19:21.235115  602969 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-248098"
	I1213 19:19:21.235145  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.235594  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.236644  602969 addons.go:69] Setting registry=true in profile "addons-248098"
	I1213 19:19:21.236672  602969 addons.go:234] Setting addon registry=true in "addons-248098"
	I1213 19:19:21.236702  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.237136  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.240546  602969 addons.go:69] Setting cloud-spanner=true in profile "addons-248098"
	I1213 19:19:21.240607  602969 addons.go:234] Setting addon cloud-spanner=true in "addons-248098"
	I1213 19:19:21.240646  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.241341  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.255273  602969 addons.go:69] Setting storage-provisioner=true in profile "addons-248098"
	I1213 19:19:21.255307  602969 addons.go:234] Setting addon storage-provisioner=true in "addons-248098"
	I1213 19:19:21.255344  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.255819  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.255999  602969 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-248098"
	I1213 19:19:21.256040  602969 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-248098"
	I1213 19:19:21.256063  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.265178  602969 addons.go:69] Setting default-storageclass=true in profile "addons-248098"
	I1213 19:19:21.265277  602969 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-248098"
	I1213 19:19:21.266138  602969 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-248098"
	I1213 19:19:21.266224  602969 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-248098"
	I1213 19:19:21.266617  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.266936  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.280307  602969 addons.go:69] Setting gcp-auth=true in profile "addons-248098"
	I1213 19:19:21.284706  602969 mustload.go:65] Loading cluster: addons-248098
	I1213 19:19:21.284944  602969 config.go:182] Loaded profile config "addons-248098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:19:21.285251  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.290520  602969 addons.go:69] Setting volcano=true in profile "addons-248098"
	I1213 19:19:21.290615  602969 addons.go:234] Setting addon volcano=true in "addons-248098"
	I1213 19:19:21.290692  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.291333  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.299441  602969 addons.go:69] Setting ingress=true in profile "addons-248098"
	I1213 19:19:21.299520  602969 addons.go:234] Setting addon ingress=true in "addons-248098"
	I1213 19:19:21.299632  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.300592  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.310583  602969 addons.go:69] Setting volumesnapshots=true in profile "addons-248098"
	I1213 19:19:21.310623  602969 addons.go:234] Setting addon volumesnapshots=true in "addons-248098"
	I1213 19:19:21.310661  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.311152  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.322127  602969 addons.go:69] Setting ingress-dns=true in profile "addons-248098"
	I1213 19:19:21.322221  602969 addons.go:234] Setting addon ingress-dns=true in "addons-248098"
	I1213 19:19:21.322359  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.322982  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.325448  602969 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 19:19:21.328554  602969 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 19:19:21.328581  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 19:19:21.328649  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
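The repeated `docker container inspect -f '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'` calls resolve which host port Docker mapped to the node container's SSH port 22; the later `new ssh client: &{IP:127.0.0.1 Port:33512 ...}` lines show the answer. The Go template indexes the Ports map by "22/tcp", takes the first binding, and reads its HostPort. A sketch of the same lookup, shelling out to docker as the log does:

    // Sketch: ask Docker which host port is bound to the container's 22/tcp.
    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func hostSSHPort(container string) (string, error) {
        out, err := exec.Command("docker", "container", "inspect", container,
            "-f", `{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}`).Output()
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(out)), nil
    }

    func main() {
        port, err := hostSSHPort("addons-248098")
        if err != nil {
            panic(err)
        }
        fmt.Println("ssh -p", port, "docker@127.0.0.1")
    }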
	I1213 19:19:21.333576  602969 out.go:177] * Verifying Kubernetes components...
	I1213 19:19:21.353965  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.380274  602969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:19:21.412609  602969 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1213 19:19:21.420870  602969 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 19:19:21.420936  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1213 19:19:21.421040  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:21.438711  602969 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1213 19:19:21.439164  602969 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1213 19:19:21.441663  602969 out.go:177]   - Using image docker.io/registry:2.8.3
	I1213 19:19:21.442042  602969 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1213 19:19:21.447223  602969 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1213 19:19:21.448274  602969 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 19:19:21.448331  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1213 19:19:21.448420  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:21.459755  602969 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1213 19:19:21.462611  602969 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1213 19:19:21.462680  602969 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1213 19:19:21.462784  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:21.463994  602969 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1213 19:19:21.464049  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1213 19:19:21.464133  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:21.482413  602969 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1213 19:19:21.482669  602969 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1213 19:19:21.482683  602969 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1213 19:19:21.482759  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:21.498395  602969 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1213 19:19:21.503674  602969 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1213 19:19:21.508350  602969 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1213 19:19:21.508597  602969 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 19:19:21.508617  602969 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 19:19:21.508684  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:21.483797  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.495853  602969 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-248098"
	I1213 19:19:21.510702  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.511144  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	W1213 19:19:21.523070  602969 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1213 19:19:21.495897  602969 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1213 19:19:21.525493  602969 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1213 19:19:21.525560  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:21.531786  602969 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1213 19:19:21.534286  602969 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 19:19:21.534309  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1213 19:19:21.534383  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:21.546429  602969 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1213 19:19:21.548949  602969 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1213 19:19:21.548974  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1213 19:19:21.549037  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:21.570396  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
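Each `sshutil.go:53] new ssh client` line opens an SSH session into the node container through the docker-mapped port found above, authenticating with the profile's id_rsa as user docker. A sketch of an equivalent connection using golang.org/x/crypto/ssh (illustrative; the key path and port are the ones this log records):

    // Sketch: open an SSH session to the minikube node container over
    // the docker-published port, as the sshutil lines above do.
    package main

    import (
        "fmt"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa")
        if err != nil {
            panic(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            panic(err)
        }
        client, err := ssh.Dial("tcp", "127.0.0.1:33512", &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway local node
        })
        if err != nil {
            panic(err)
        }
        defer client.Close()
        session, err := client.NewSession()
        if err != nil {
            panic(err)
        }
        defer session.Close()
        out, _ := session.Output("uname -m")
        fmt.Printf("%s", out)
    }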
	I1213 19:19:21.577265  602969 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 19:19:21.577286  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1213 19:19:21.577348  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:21.591406  602969 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1213 19:19:21.593808  602969 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1213 19:19:21.595784  602969 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1213 19:19:21.603024  602969 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1213 19:19:21.605198  602969 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1213 19:19:21.607893  602969 addons.go:234] Setting addon default-storageclass=true in "addons-248098"
	I1213 19:19:21.607928  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.608339  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.610620  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:21.613014  602969 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1213 19:19:21.615144  602969 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1213 19:19:21.615254  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:21.623841  602969 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1213 19:19:21.630322  602969 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1213 19:19:21.630356  602969 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1213 19:19:21.630446  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:21.643406  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:21.715448  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:21.716808  602969 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1213 19:19:21.719377  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:21.720893  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:21.722077  602969 out.go:177]   - Using image docker.io/busybox:stable
	I1213 19:19:21.724551  602969 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 19:19:21.724577  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1213 19:19:21.724644  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:21.786959  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:21.806260  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:21.807337  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:21.813786  602969 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 19:19:21.813806  602969 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 19:19:21.813880  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:21.815114  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:21.831472  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:21.837880  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:21.856326  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 19:19:21.869818  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:21.923870  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 19:19:22.001811  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 19:19:22.058636  602969 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1213 19:19:22.058671  602969 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1213 19:19:22.180538  602969 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 19:19:22.180560  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1213 19:19:22.187506  602969 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1213 19:19:22.187588  602969 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1213 19:19:22.221301  602969 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1213 19:19:22.221406  602969 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1213 19:19:22.245357  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 19:19:22.251332  602969 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1213 19:19:22.251408  602969 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1213 19:19:22.272828  602969 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1213 19:19:22.272865  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1213 19:19:22.287633  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1213 19:19:22.292810  602969 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1213 19:19:22.292893  602969 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1213 19:19:22.302161  602969 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 19:19:22.302249  602969 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 19:19:22.308889  602969 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1213 19:19:22.308959  602969 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1213 19:19:22.367800  602969 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1213 19:19:22.367888  602969 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1213 19:19:22.381548  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 19:19:22.418969  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 19:19:22.427226  602969 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 19:19:22.427295  602969 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 19:19:22.460529  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 19:19:22.487312  602969 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1213 19:19:22.487419  602969 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1213 19:19:22.492906  602969 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1213 19:19:22.492991  602969 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1213 19:19:22.502366  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1213 19:19:22.513766  602969 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1213 19:19:22.513843  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1213 19:19:22.550994  602969 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1213 19:19:22.551082  602969 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1213 19:19:22.576625  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 19:19:22.622394  602969 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1213 19:19:22.622461  602969 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1213 19:19:22.667331  602969 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1213 19:19:22.667409  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1213 19:19:22.686529  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1213 19:19:22.727043  602969 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1213 19:19:22.727128  602969 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1213 19:19:22.730868  602969 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.500445437s)
	I1213 19:19:22.730988  602969 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.350672368s)
	I1213 19:19:22.731177  602969 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:19:22.731216  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
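The long pipeline above rewrites the coredns ConfigMap in place: sed inserts a hosts stanza ahead of the forward directive, so host.minikube.internal resolves to the gateway address 192.168.49.1 inside the cluster, plus a log directive ahead of errors, then feeds the result back through `kubectl replace`. A Go sketch of the same string surgery on a bare Corefile (simplified; the real command edits the full ConfigMap YAML and also adds the log line):

    // Sketch: inject a hosts block before CoreDNS's forward directive,
    // mirroring the sed pipeline in the log above.
    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        corefile := ".:53 {\n        errors\n        forward . /etc/resolv.conf\n        cache 30\n}\n"
        hosts := "        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }\n"
        patched := strings.Replace(corefile,
            "        forward . /etc/resolv.conf",
            hosts+"        forward . /etc/resolv.conf", 1)
        fmt.Print(patched)
    }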
	I1213 19:19:22.814545  602969 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1213 19:19:22.814622  602969 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1213 19:19:22.866479  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1213 19:19:22.952807  602969 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 19:19:22.952888  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1213 19:19:23.027535  602969 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1213 19:19:23.027616  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1213 19:19:23.096867  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 19:19:23.140904  602969 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1213 19:19:23.140974  602969 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1213 19:19:23.213097  602969 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1213 19:19:23.213164  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1213 19:19:23.284483  602969 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1213 19:19:23.284559  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1213 19:19:23.313960  602969 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 19:19:23.314030  602969 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1213 19:19:23.403457  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 19:19:26.482868  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.558964552s)
	I1213 19:19:26.482971  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.481077701s)
	I1213 19:19:26.483045  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.237614796s)
	I1213 19:19:26.483100  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.195445671s)
	I1213 19:19:26.483176  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.626823049s)
	I1213 19:19:26.653244  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.271604446s)
	I1213 19:19:26.653524  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.234460463s)
	W1213 19:19:26.730600  602969 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
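The storage-provisioner-rancher failure above is the standard optimistic-concurrency conflict: minikube read the local-path StorageClass, something else updated it first, and the write was rejected because the cached resourceVersion had gone stale. The usual remedy is to re-read and retry the mutation. A sketch using client-go's retry helper (assumes a cluster reachable via $KUBECONFIG; not minikube's own code):

    // Sketch: mark a StorageClass as default, retrying on exactly the
    // "object has been modified" conflict the warning above reports.
    package main

    import (
        "context"
        "os"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
        "k8s.io/client-go/util/retry"
    )

    func markDefault(cs *kubernetes.Clientset, name string) error {
        return retry.RetryOnConflict(retry.DefaultRetry, func() error {
            // Re-read inside the closure so every attempt carries the
            // latest resourceVersion instead of a stale one.
            sc, err := cs.StorageV1().StorageClasses().Get(context.TODO(), name, metav1.GetOptions{})
            if err != nil {
                return err
            }
            if sc.Annotations == nil {
                sc.Annotations = map[string]string{}
            }
            sc.Annotations["storageclass.kubernetes.io/is-default-class"] = "true"
            _, err = cs.StorageV1().StorageClasses().Update(context.TODO(), sc, metav1.UpdateOptions{})
            return err
        })
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
        if err != nil {
            panic(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            panic(err)
        }
        if err := markDefault(cs, "local-path"); err != nil {
            panic(err)
        }
    }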
	I1213 19:19:28.119649  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.659025174s)
	I1213 19:19:28.119679  602969 addons.go:475] Verifying addon ingress=true in "addons-248098"
	I1213 19:19:28.119926  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.617462835s)
	I1213 19:19:28.119999  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.543296224s)
	I1213 19:19:28.120008  602969 addons.go:475] Verifying addon metrics-server=true in "addons-248098"
	I1213 19:19:28.120034  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.433440279s)
	I1213 19:19:28.120042  602969 addons.go:475] Verifying addon registry=true in "addons-248098"
	I1213 19:19:28.120316  602969 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.389064075s)
	I1213 19:19:28.120347  602969 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
	I1213 19:19:28.121375  602969 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.390177239s)
	I1213 19:19:28.122131  602969 node_ready.go:35] waiting up to 6m0s for node "addons-248098" to be "Ready" ...
	I1213 19:19:28.122346  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.255778687s)
	I1213 19:19:28.122682  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.025719663s)
	W1213 19:19:28.122720  602969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 19:19:28.122738  602969 retry.go:31] will retry after 368.887977ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
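The failure and retry above are a CRD ordering race: one apply batch creates both the VolumeSnapshot CRDs and a VolumeSnapshotClass object, and the custom resource is rejected because the new kind is not yet established in API discovery ("ensure CRDs are installed first"). minikube simply retries, and the follow-up apply later in the log adds --force. An alternative sketch is to apply the CRDs first and block until they report Established before applying anything that depends on them (paths and CRD names follow the log; error handling elided):

    // Sketch: two-phase apply that avoids the "no matches for kind" race.
    package main

    import (
        "os"
        "os/exec"
    )

    func run(args ...string) error {
        cmd := exec.Command(args[0], args[1:]...)
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        return cmd.Run()
    }

    func main() {
        // Phase 1: the CRDs only.
        run("kubectl", "apply",
            "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml",
            "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml",
            "-f", "/etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml")
        // Block until the API server can serve the new kinds.
        run("kubectl", "wait", "--for", "condition=established", "--timeout=60s",
            "crd/volumesnapshotclasses.snapshot.storage.k8s.io")
        // Phase 2: objects that depend on those kinds.
        run("kubectl", "apply",
            "-f", "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml",
            "-f", "/etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml",
            "-f", "/etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml")
    }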
	I1213 19:19:28.122848  602969 out.go:177] * Verifying ingress addon...
	I1213 19:19:28.122940  602969 out.go:177] * Verifying registry addon...
	I1213 19:19:28.125945  602969 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-248098 service yakd-dashboard -n yakd-dashboard
	
	I1213 19:19:28.126854  602969 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1213 19:19:28.127952  602969 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1213 19:19:28.178514  602969 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 19:19:28.178607  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:28.181483  602969 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1213 19:19:28.181566  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
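The kapi.go lines that dominate the rest of the log poll pods by label selector until every match reports Ready; "current state: Pending: [<nil>]" means pods matched the selector but have not yet reached the Ready condition (the bracketed value appears to be the most recent error, nil here). A function-level sketch of such a poll with client-go (illustrative, not kapi.go itself; clientset wiring omitted):

    // Sketch: poll until all pods matching a label selector are Ready.
    package addons

    import (
        "context"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/apimachinery/pkg/util/wait"
        "k8s.io/client-go/kubernetes"
    )

    func podReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue
            }
        }
        return false
    }

    func waitForPods(cs kubernetes.Interface, ns, selector string, timeout time.Duration) error {
        return wait.PollUntilContextTimeout(context.Background(), 3*time.Second, timeout, true,
            func(ctx context.Context) (bool, error) {
                pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
                if err != nil || len(pods.Items) == 0 {
                    return false, nil // transient errors and empty lists: keep polling
                }
                for i := range pods.Items {
                    if !podReady(&pods.Items[i]) {
                        return false, nil
                    }
                }
                return true, nil
            })
    }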
	I1213 19:19:28.491896  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 19:19:28.653877  602969 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-248098" context rescaled to 1 replicas
	I1213 19:19:28.656611  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:28.656806  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:28.999336  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.595772838s)
	I1213 19:19:28.999422  602969 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-248098"
	I1213 19:19:29.004093  602969 out.go:177] * Verifying csi-hostpath-driver addon...
	I1213 19:19:29.007703  602969 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1213 19:19:29.027443  602969 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 19:19:29.027466  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:29.141297  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:29.142221  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:29.512286  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:29.634043  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:29.635325  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:30.018614  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:30.126763  602969 node_ready.go:53] node "addons-248098" has status "Ready":"False"
	I1213 19:19:30.141230  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:30.144333  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:30.511781  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:30.631473  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:30.632040  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:31.013033  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:31.131738  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:31.132589  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:31.512185  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:31.631006  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:31.631908  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:31.862556  602969 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1213 19:19:31.862644  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:31.881197  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:31.993912  602969 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1213 19:19:32.015203  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:32.029154  602969 addons.go:234] Setting addon gcp-auth=true in "addons-248098"
	I1213 19:19:32.029264  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:32.029785  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:32.059441  602969 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1213 19:19:32.059508  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:32.078992  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:32.131417  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:32.131771  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:32.132258  602969 node_ready.go:53] node "addons-248098" has status "Ready":"False"
	I1213 19:19:32.192934  602969 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1213 19:19:32.195358  602969 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1213 19:19:32.197626  602969 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1213 19:19:32.197657  602969 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1213 19:19:32.216983  602969 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1213 19:19:32.217006  602969 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1213 19:19:32.235808  602969 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 19:19:32.235833  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1213 19:19:32.255597  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 19:19:32.513233  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:32.636785  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:32.637301  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:32.775691  602969 addons.go:475] Verifying addon gcp-auth=true in "addons-248098"
	I1213 19:19:32.779925  602969 out.go:177] * Verifying gcp-auth addon...
	I1213 19:19:32.785024  602969 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1213 19:19:32.816723  602969 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1213 19:19:32.816751  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:33.018884  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:33.131804  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:33.133704  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:33.289115  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:33.511882  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:33.631923  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:33.632439  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:33.788543  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:34.012574  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:34.131105  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:34.131521  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:34.288721  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:34.511845  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:34.626878  602969 node_ready.go:53] node "addons-248098" has status "Ready":"False"
	I1213 19:19:34.631341  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:34.633312  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:34.789417  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:35.015921  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:35.131195  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:35.131570  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:35.289725  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:35.511913  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:35.631394  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:35.632893  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:35.788325  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:36.012571  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:36.131216  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:36.132275  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:36.288541  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:36.512198  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:36.631774  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:36.633266  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:36.788674  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:37.014562  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:37.125705  602969 node_ready.go:53] node "addons-248098" has status "Ready":"False"
	I1213 19:19:37.131666  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:37.132381  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:37.289036  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:37.512079  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:37.631486  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:37.632552  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:37.788860  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:38.013292  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:38.131696  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:38.132255  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:38.288818  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:38.511431  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:38.631017  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:38.631919  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:38.789005  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:39.013018  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:39.125807  602969 node_ready.go:53] node "addons-248098" has status "Ready":"False"
	I1213 19:19:39.131912  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:39.132342  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:39.288913  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:39.546103  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:39.634124  602969 node_ready.go:49] node "addons-248098" has status "Ready":"True"
	I1213 19:19:39.634153  602969 node_ready.go:38] duration metric: took 11.511992619s for node "addons-248098" to be "Ready" ...
	I1213 19:19:39.634164  602969 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
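node_ready flips once the node's Ready condition reports True (kubelet registered and networking up), here after 11.5s; from this point the checks move on to individual system-critical pods. A one-off version of the same node check, shelling out to kubectl's jsonpath output in keeping with this log's exec-heavy style:

    // Sketch: read a node's Ready condition via kubectl jsonpath.
    package main

    import (
        "fmt"
        "os/exec"
    )

    func main() {
        out, err := exec.Command("kubectl", "get", "node", "addons-248098",
            "-o", `jsonpath={.status.conditions[?(@.type=="Ready")].status}`).Output()
        if err != nil {
            panic(err)
        }
        fmt.Printf("node Ready=%s\n", out) // prints "True" once kubelet and CNI are up
    }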
	I1213 19:19:39.648967  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:39.653911  602969 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bt6ls" in "kube-system" namespace to be "Ready" ...
	I1213 19:19:39.656285  602969 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 19:19:39.656313  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:39.871358  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:40.068754  602969 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 19:19:40.068783  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:40.175155  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:40.176729  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:40.324497  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:40.514084  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:40.631680  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:40.632493  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:40.794699  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:41.015953  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:41.132088  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:41.132663  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:41.160673  602969 pod_ready.go:93] pod "coredns-7c65d6cfc9-bt6ls" in "kube-system" namespace has status "Ready":"True"
	I1213 19:19:41.160700  602969 pod_ready.go:82] duration metric: took 1.506750951s for pod "coredns-7c65d6cfc9-bt6ls" in "kube-system" namespace to be "Ready" ...
	I1213 19:19:41.160728  602969 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-248098" in "kube-system" namespace to be "Ready" ...
	I1213 19:19:41.167909  602969 pod_ready.go:93] pod "etcd-addons-248098" in "kube-system" namespace has status "Ready":"True"
	I1213 19:19:41.167936  602969 pod_ready.go:82] duration metric: took 7.198218ms for pod "etcd-addons-248098" in "kube-system" namespace to be "Ready" ...
	I1213 19:19:41.167950  602969 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-248098" in "kube-system" namespace to be "Ready" ...
	I1213 19:19:41.175836  602969 pod_ready.go:93] pod "kube-apiserver-addons-248098" in "kube-system" namespace has status "Ready":"True"
	I1213 19:19:41.175861  602969 pod_ready.go:82] duration metric: took 7.896877ms for pod "kube-apiserver-addons-248098" in "kube-system" namespace to be "Ready" ...
	I1213 19:19:41.175876  602969 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-248098" in "kube-system" namespace to be "Ready" ...
	I1213 19:19:41.188852  602969 pod_ready.go:93] pod "kube-controller-manager-addons-248098" in "kube-system" namespace has status "Ready":"True"
	I1213 19:19:41.188879  602969 pod_ready.go:82] duration metric: took 12.994611ms for pod "kube-controller-manager-addons-248098" in "kube-system" namespace to be "Ready" ...
	I1213 19:19:41.188894  602969 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rcbrb" in "kube-system" namespace to be "Ready" ...
	I1213 19:19:41.232429  602969 pod_ready.go:93] pod "kube-proxy-rcbrb" in "kube-system" namespace has status "Ready":"True"
	I1213 19:19:41.232451  602969 pod_ready.go:82] duration metric: took 43.55018ms for pod "kube-proxy-rcbrb" in "kube-system" namespace to be "Ready" ...
	I1213 19:19:41.232462  602969 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-248098" in "kube-system" namespace to be "Ready" ...
	I1213 19:19:41.289689  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:41.513507  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:41.626175  602969 pod_ready.go:93] pod "kube-scheduler-addons-248098" in "kube-system" namespace has status "Ready":"True"
	I1213 19:19:41.626347  602969 pod_ready.go:82] duration metric: took 393.875067ms for pod "kube-scheduler-addons-248098" in "kube-system" namespace to be "Ready" ...
	I1213 19:19:41.626366  602969 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace to be "Ready" ...
	I1213 19:19:41.635558  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:41.637958  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:41.788507  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:42.026734  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:42.137519  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:42.139399  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:42.289568  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:42.515967  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:42.649268  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:42.651606  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:42.789418  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:43.014152  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:43.133188  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:43.134981  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:43.290443  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:43.512663  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:43.632511  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:43.634455  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:43.635299  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:19:43.789909  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:44.014063  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:44.133266  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:44.134645  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:44.288648  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:44.512023  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:44.644352  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:44.646135  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:44.792681  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:45.023186  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:45.149208  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:45.150477  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:45.291751  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:45.512953  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:45.637673  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:45.638589  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:45.640894  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:19:45.796714  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:46.016964  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:46.143385  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:46.145482  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:46.289284  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:46.512969  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:46.637122  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:46.641525  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:46.788321  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:47.016793  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:47.139112  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:47.141348  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:47.288749  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:47.513840  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:47.633496  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:47.637674  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:47.789373  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:48.015282  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:48.133087  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:48.134694  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:48.136494  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:19:48.289117  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:48.512513  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:48.653344  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:48.660119  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:48.790039  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:49.014997  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:49.144771  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:49.148704  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:49.289337  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:49.512915  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:49.637487  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:49.637748  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:49.795640  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:50.033774  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:50.144261  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:50.144566  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:50.146679  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:19:50.288390  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:50.514008  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:50.648275  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:50.649898  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:50.788546  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:51.014168  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:51.134498  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:51.135723  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:51.293382  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:51.514369  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:51.637001  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:51.639183  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:51.789516  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:52.018454  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:52.132634  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:52.134129  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:52.289744  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:52.512798  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:52.647245  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:52.648787  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:52.653226  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:19:52.788935  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:53.014878  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:53.135610  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:53.138325  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:53.289605  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:53.513675  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:53.633387  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:53.636095  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:53.788718  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:54.020268  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:54.132212  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:54.132739  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:54.288627  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:54.513437  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:54.639017  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:54.639361  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:54.788600  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:55.019780  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:55.135485  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:55.136975  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:55.143199  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:19:55.288914  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:55.514550  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:55.633239  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:55.634475  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:55.789007  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:56.017594  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:56.133967  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:56.134589  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:56.288756  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:56.512643  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:56.632760  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:56.635279  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:56.788398  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:57.013827  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:57.132172  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:57.133877  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:57.288213  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:57.513044  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:57.633465  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:57.634971  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:57.638124  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:19:57.789388  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:58.013941  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:58.133369  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:58.134953  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:58.289323  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:58.513659  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:58.633687  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:58.636209  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:58.789326  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:59.013883  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:59.155798  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:59.156038  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:59.288594  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:59.512330  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:59.638783  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:59.640044  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:59.640781  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:19:59.790902  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:00.070081  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:00.156179  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:00.166602  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:00.316992  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:00.527241  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:00.676815  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:00.712768  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:00.847142  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:01.020393  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:01.152526  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:01.167867  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:01.289311  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:01.512968  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:01.634331  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:01.637221  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:01.789851  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:02.029159  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:02.137616  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:02.148018  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:02.150699  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:02.289957  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:02.521936  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:02.635028  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:02.640003  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:02.791133  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:03.015522  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:03.132739  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:03.133141  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:03.290347  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:03.512827  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:03.635478  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:03.636481  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:03.790544  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:04.014368  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:04.135801  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:04.137715  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:04.289648  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:04.512571  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:04.640686  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:04.642543  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:04.644580  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:04.812287  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:05.044546  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:05.136402  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:05.143417  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:05.289020  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:05.514026  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:05.638751  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:05.640261  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:05.793173  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:06.015932  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:06.133981  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:06.135052  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:06.288936  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:06.513436  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:06.633907  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:06.635179  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:06.789073  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:07.015510  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:07.136087  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:07.136169  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:07.139564  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:07.289141  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:07.513160  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:07.633094  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:07.634880  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:07.790913  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:08.014398  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:08.137505  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:08.140139  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:08.289634  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:08.513350  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:08.632550  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:08.634494  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:08.788437  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:09.016508  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:09.139393  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:09.141910  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:09.155680  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:09.289744  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:09.514429  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:09.634616  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:09.635300  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:09.793078  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:10.026954  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:10.132998  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:10.134060  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:10.289024  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:10.513687  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:10.633539  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:10.634473  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:10.788547  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:11.014898  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:11.135610  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:11.138976  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:11.288860  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:11.513364  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:11.632145  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:11.633274  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:11.633863  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:11.789034  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:12.023861  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:12.139256  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:12.140575  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:12.289201  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:12.517252  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:12.633790  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:12.635523  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:12.789376  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:13.016912  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:13.139113  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:13.141565  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:13.288941  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:13.518018  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:13.632009  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:13.634862  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:13.634895  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:13.788636  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:14.018063  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:14.135813  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:14.136337  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:14.289217  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:14.513284  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:14.633631  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:14.635788  602969 kapi.go:107] duration metric: took 46.507833164s to wait for kubernetes.io/minikube-addons=registry ...
	I1213 19:20:14.789381  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:15.029174  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:15.141655  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:15.289034  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:15.513478  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:15.633942  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:15.789331  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:16.015948  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:16.136565  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:16.145005  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:16.288915  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:16.513700  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:16.636525  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:16.789727  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:17.014867  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:17.137578  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:17.289841  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:17.513784  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:17.633105  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:17.789484  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:18.022624  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:18.140248  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:18.288599  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:18.513057  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:18.632575  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:18.640037  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:18.789504  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:19.022468  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:19.133212  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:19.289041  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:19.513478  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:19.632476  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:19.789055  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:20.020624  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:20.143687  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:20.290244  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:20.513839  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:20.633022  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:20.789103  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:21.030957  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:21.137112  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:21.139362  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:21.288758  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:21.513599  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:21.634029  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:21.789807  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:22.017041  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:22.133097  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:22.304824  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:22.513267  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:22.633313  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:22.788346  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:23.017822  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:23.131759  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:23.289610  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:23.513266  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:23.636653  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:23.638376  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:23.789193  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:24.020867  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:24.135162  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:24.290797  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:24.521966  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:24.642084  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:24.788736  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:25.015145  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:25.135198  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:25.289972  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:25.513394  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:25.634048  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:25.789882  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:26.014546  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:26.136141  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:26.146013  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:26.293030  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:26.518439  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:26.634951  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:26.792262  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:27.013288  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:27.132968  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:27.289216  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:27.514447  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:27.632103  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:27.789207  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:28.017253  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:28.145210  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:28.289132  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:28.520569  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:28.635024  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:28.635875  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:28.789278  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:29.014206  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:29.139383  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:29.288706  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:29.513883  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:29.634664  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:29.791618  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:30.020353  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:30.139727  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:30.288465  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:30.515253  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:30.635961  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:30.637639  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:30.789132  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:31.015337  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:31.134245  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:31.289615  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:31.514090  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:31.633246  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:31.789185  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:32.015503  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:32.131557  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:32.289416  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:32.518303  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:32.640843  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:32.643531  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:32.789628  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:33.018720  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:33.134361  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:33.288610  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:33.512882  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:33.633328  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:33.789183  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:34.013826  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:34.133333  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:34.288683  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:34.513055  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:34.638394  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:34.789863  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:35.018056  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:35.134152  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:35.134873  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:35.288402  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:35.514464  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:35.641587  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:35.790447  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:36.034212  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:36.136737  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:36.288750  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:36.514366  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:36.638336  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:36.800939  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:37.025466  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:37.137327  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:37.138588  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:37.289196  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:37.512883  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:37.652347  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:37.791008  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:38.014218  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:38.134012  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:38.300160  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:38.523258  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:38.647934  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:38.804079  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:39.066002  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:39.214225  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:39.228628  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:39.293700  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:39.513347  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:39.632635  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:39.793893  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:40.034033  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:40.142980  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:40.290171  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:40.512997  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:40.637209  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:40.789419  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:41.015513  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:41.134702  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:41.292884  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:41.513085  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:41.639119  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:41.645783  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:41.790097  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:42.015437  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:42.133077  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:42.289343  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:42.513468  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:42.634784  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:42.789568  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:43.024691  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:43.134400  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:43.291754  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:43.516872  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:43.635803  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:43.790316  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:44.015891  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:44.133459  602969 kapi.go:107] duration metric: took 1m16.006613767s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1213 19:20:44.136564  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:44.289895  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:44.512860  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:44.789321  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:45.099748  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:45.304635  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:45.513890  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:45.789078  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:46.017143  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:46.289593  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:46.512358  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:46.634782  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:46.789287  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:47.013249  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:47.289667  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:47.513740  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:47.789566  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:48.033765  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:48.292904  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:48.515082  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:48.636139  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:48.789523  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:49.013959  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:49.290131  602969 kapi.go:107] duration metric: took 1m16.505104895s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1213 19:20:49.292413  602969 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-248098 cluster.
	I1213 19:20:49.294959  602969 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1213 19:20:49.297342  602969 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
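	Aside: the two gcp-auth behaviours described in the output above can be exercised by hand. A minimal sketch, assuming a pod named my-pod (hypothetical); the label key comes from the output above, while the "true" value is an assumption:

	    # Opt one pod out of credential mounting (value "true" is an assumption).
	    kubectl --context addons-248098 label pod my-pod gcp-auth-skip-secret=true

	    # Re-mount credentials into pods that already existed, per the hint above.
	    minikube -p addons-248098 addons enable gcp-auth --refresh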
	I1213 19:20:49.512561  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:50.051929  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:50.159213  602969 pod_ready.go:93] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"True"
	I1213 19:20:50.159240  602969 pod_ready.go:82] duration metric: took 1m8.532866479s for pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace to be "Ready" ...
	I1213 19:20:50.159254  602969 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-xsrsn" in "kube-system" namespace to be "Ready" ...
	I1213 19:20:50.185362  602969 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-xsrsn" in "kube-system" namespace has status "Ready":"True"
	I1213 19:20:50.185387  602969 pod_ready.go:82] duration metric: took 26.125113ms for pod "nvidia-device-plugin-daemonset-xsrsn" in "kube-system" namespace to be "Ready" ...
	I1213 19:20:50.185410  602969 pod_ready.go:39] duration metric: took 1m10.551212061s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 19:20:50.185430  602969 api_server.go:52] waiting for apiserver process to appear ...
	I1213 19:20:50.185462  602969 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:20:50.185531  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:20:50.294525  602969 cri.go:89] found id: "27ee00545a23ca3d022b68468445151371316d97acbaf2235b93791b944d3e2d"
	I1213 19:20:50.294549  602969 cri.go:89] found id: ""
	I1213 19:20:50.294557  602969 logs.go:282] 1 containers: [27ee00545a23ca3d022b68468445151371316d97acbaf2235b93791b944d3e2d]
	I1213 19:20:50.294618  602969 ssh_runner.go:195] Run: which crictl
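	Aside: the lookup pattern above repeats for every component that follows — resolve the container ID with crictl, then tail its logs. Run by hand inside the node (e.g. via minikube -p addons-248098 ssh), it is roughly:

	    # Mirror the cri.go lookup and the crictl logs call that follows it.
	    id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n1)
	    sudo crictl logs --tail 400 "$id"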
	I1213 19:20:50.304608  602969 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:20:50.304682  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:20:50.362212  602969 cri.go:89] found id: "289abb226f700e5f3c20a349d1479217ce3aa4bc311be8aa6d6f12374b0cb68a"
	I1213 19:20:50.362235  602969 cri.go:89] found id: ""
	I1213 19:20:50.362243  602969 logs.go:282] 1 containers: [289abb226f700e5f3c20a349d1479217ce3aa4bc311be8aa6d6f12374b0cb68a]
	I1213 19:20:50.362329  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:20:50.366049  602969 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:20:50.366120  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:20:50.470834  602969 cri.go:89] found id: "d5719b1b478dec4dc55817883d4f9577dc475cde036d3e161334d371c1e81411"
	I1213 19:20:50.470859  602969 cri.go:89] found id: ""
	I1213 19:20:50.470867  602969 logs.go:282] 1 containers: [d5719b1b478dec4dc55817883d4f9577dc475cde036d3e161334d371c1e81411]
	I1213 19:20:50.470921  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:20:50.503447  602969 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:20:50.503522  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:20:50.517428  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:50.605090  602969 cri.go:89] found id: "833e3ba74cac9f8b814ea06aeafe171150043188b329c9d621a03c75dbc4578f"
	I1213 19:20:50.605116  602969 cri.go:89] found id: ""
	I1213 19:20:50.605134  602969 logs.go:282] 1 containers: [833e3ba74cac9f8b814ea06aeafe171150043188b329c9d621a03c75dbc4578f]
	I1213 19:20:50.605196  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:20:50.610821  602969 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:20:50.610898  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:20:50.690567  602969 cri.go:89] found id: "1449f483df90f5a531d350913e0aa7cdf914d18ed8dca152d252402554d37102"
	I1213 19:20:50.690647  602969 cri.go:89] found id: ""
	I1213 19:20:50.690662  602969 logs.go:282] 1 containers: [1449f483df90f5a531d350913e0aa7cdf914d18ed8dca152d252402554d37102]
	I1213 19:20:50.690732  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:20:50.695050  602969 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:20:50.695158  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:20:50.741497  602969 cri.go:89] found id: "4283a1804a94cc88954082e4f508a1cbf5f868d2c0662a9d3f0e826e9b6c5f1a"
	I1213 19:20:50.741522  602969 cri.go:89] found id: ""
	I1213 19:20:50.741531  602969 logs.go:282] 1 containers: [4283a1804a94cc88954082e4f508a1cbf5f868d2c0662a9d3f0e826e9b6c5f1a]
	I1213 19:20:50.741591  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:20:50.745570  602969 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:20:50.745648  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:20:50.791676  602969 cri.go:89] found id: "da25e26a83aad96484fe1eecafba3a8b5e62f5486ff06573a87eb752e248b7f3"
	I1213 19:20:50.791699  602969 cri.go:89] found id: ""
	I1213 19:20:50.791707  602969 logs.go:282] 1 containers: [da25e26a83aad96484fe1eecafba3a8b5e62f5486ff06573a87eb752e248b7f3]
	I1213 19:20:50.791768  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:20:50.802647  602969 logs.go:123] Gathering logs for kubelet ...
	I1213 19:20:50.802675  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 19:20:50.885177  602969 logs.go:138] Found kubelet problem: Dec 13 19:19:23 addons-248098 kubelet[1527]: W1213 19:19:23.865745    1527 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-248098" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-248098' and this object
	W1213 19:20:50.885587  602969 logs.go:138] Found kubelet problem: Dec 13 19:19:23 addons-248098 kubelet[1527]: E1213 19:19:23.865827    1527 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-248098\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-248098' and this object" logger="UnhandledError"
	W1213 19:20:50.911543  602969 logs.go:138] Found kubelet problem: Dec 13 19:19:39 addons-248098 kubelet[1527]: W1213 19:19:39.477016    1527 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-248098" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-248098' and this object
	W1213 19:20:50.911946  602969 logs.go:138] Found kubelet problem: Dec 13 19:19:39 addons-248098 kubelet[1527]: E1213 19:19:39.477065    1527 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-248098\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-248098' and this object" logger="UnhandledError"
	I1213 19:20:50.975179  602969 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:20:50.975311  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 19:20:51.015960  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:51.218135  602969 logs.go:123] Gathering logs for kube-apiserver [27ee00545a23ca3d022b68468445151371316d97acbaf2235b93791b944d3e2d] ...
	I1213 19:20:51.218208  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27ee00545a23ca3d022b68468445151371316d97acbaf2235b93791b944d3e2d"
	I1213 19:20:51.297330  602969 logs.go:123] Gathering logs for etcd [289abb226f700e5f3c20a349d1479217ce3aa4bc311be8aa6d6f12374b0cb68a] ...
	I1213 19:20:51.297371  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 289abb226f700e5f3c20a349d1479217ce3aa4bc311be8aa6d6f12374b0cb68a"
	I1213 19:20:51.364315  602969 logs.go:123] Gathering logs for kube-proxy [1449f483df90f5a531d350913e0aa7cdf914d18ed8dca152d252402554d37102] ...
	I1213 19:20:51.364352  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1449f483df90f5a531d350913e0aa7cdf914d18ed8dca152d252402554d37102"
	I1213 19:20:51.419594  602969 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:20:51.419625  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:20:51.513154  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:51.528485  602969 logs.go:123] Gathering logs for dmesg ...
	I1213 19:20:51.528569  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:20:51.546548  602969 logs.go:123] Gathering logs for coredns [d5719b1b478dec4dc55817883d4f9577dc475cde036d3e161334d371c1e81411] ...
	I1213 19:20:51.546579  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5719b1b478dec4dc55817883d4f9577dc475cde036d3e161334d371c1e81411"
	I1213 19:20:51.597667  602969 logs.go:123] Gathering logs for kube-scheduler [833e3ba74cac9f8b814ea06aeafe171150043188b329c9d621a03c75dbc4578f] ...
	I1213 19:20:51.597699  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 833e3ba74cac9f8b814ea06aeafe171150043188b329c9d621a03c75dbc4578f"
	I1213 19:20:51.650955  602969 logs.go:123] Gathering logs for kube-controller-manager [4283a1804a94cc88954082e4f508a1cbf5f868d2c0662a9d3f0e826e9b6c5f1a] ...
	I1213 19:20:51.651038  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4283a1804a94cc88954082e4f508a1cbf5f868d2c0662a9d3f0e826e9b6c5f1a"
	I1213 19:20:51.747175  602969 logs.go:123] Gathering logs for kindnet [da25e26a83aad96484fe1eecafba3a8b5e62f5486ff06573a87eb752e248b7f3] ...
	I1213 19:20:51.747210  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da25e26a83aad96484fe1eecafba3a8b5e62f5486ff06573a87eb752e248b7f3"
	I1213 19:20:51.795094  602969 logs.go:123] Gathering logs for container status ...
	I1213 19:20:51.795127  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
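	Aside: the container-status command just above carries its own fallback chain, annotated here:

	    # Prefer crictl wherever `which` finds it (falling back to the bare name),
	    # and fall back to `docker ps -a` only if the crictl invocation fails.
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a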
	I1213 19:20:51.851701  602969 out.go:358] Setting ErrFile to fd 2...
	I1213 19:20:51.851732  602969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1213 19:20:51.851816  602969 out.go:270] X Problems detected in kubelet:
	W1213 19:20:51.851832  602969 out.go:270]   Dec 13 19:19:23 addons-248098 kubelet[1527]: W1213 19:19:23.865745    1527 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-248098" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-248098' and this object
	W1213 19:20:51.851839  602969 out.go:270]   Dec 13 19:19:23 addons-248098 kubelet[1527]: E1213 19:19:23.865827    1527 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-248098\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-248098' and this object" logger="UnhandledError"
	W1213 19:20:51.851850  602969 out.go:270]   Dec 13 19:19:39 addons-248098 kubelet[1527]: W1213 19:19:39.477016    1527 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-248098" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-248098' and this object
	W1213 19:20:51.851875  602969 out.go:270]   Dec 13 19:19:39 addons-248098 kubelet[1527]: E1213 19:19:39.477065    1527 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-248098\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-248098' and this object" logger="UnhandledError"
	I1213 19:20:51.851882  602969 out.go:358] Setting ErrFile to fd 2...
	I1213 19:20:51.851888  602969 out.go:392] TERM=,COLORTERM=, which probably does not support color
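	Aside: the four kubelet problems flagged above all appear to be the node authorizer rejecting the kubelet's list/watch of the kube-root-ca.crt ConfigMap before a pod binding establishes a relationship between the node and the object; they occur once per namespace during addon startup and do not recur later in this log. The object itself can be checked with admin credentials (a sketch):

	    # Should print the ConfigMap name if the root CA bundle exists as expected.
	    kubectl --context addons-248098 get configmap kube-root-ca.crt -n kube-system -o name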
	I1213 19:20:52.014612  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:52.515749  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:53.014362  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:53.513704  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:54.014702  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:54.513764  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:55.015201  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:55.584150  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:56.014613  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:56.513381  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:57.013544  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:57.514445  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:58.026351  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:58.514303  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:59.012890  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:59.512881  602969 kapi.go:107] duration metric: took 1m30.505185227s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1213 19:20:59.515617  602969 out.go:177] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, ingress-dns, cloud-spanner, storage-provisioner, default-storageclass, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1213 19:20:59.518027  602969 addons.go:510] duration metric: took 1m38.287350638s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin ingress-dns cloud-spanner storage-provisioner default-storageclass inspektor-gadget metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
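	Aside: all fourteen addons came up in about 1m38s, with csi-hostpath-driver the last to become Ready. Each addon can also be toggled individually on this profile; for example:

	    # Enable, disable, or list addons for the addons-248098 profile.
	    minikube -p addons-248098 addons enable metrics-server
	    minikube -p addons-248098 addons disable metrics-server
	    minikube -p addons-248098 addons list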
	I1213 19:21:01.853210  602969 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:21:01.867505  602969 api_server.go:72] duration metric: took 1m40.637063007s to wait for apiserver process to appear ...
	I1213 19:21:01.867534  602969 api_server.go:88] waiting for apiserver healthz status ...
	I1213 19:21:01.868050  602969 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:21:01.868129  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:21:01.908108  602969 cri.go:89] found id: "27ee00545a23ca3d022b68468445151371316d97acbaf2235b93791b944d3e2d"
	I1213 19:21:01.908133  602969 cri.go:89] found id: ""
	I1213 19:21:01.908141  602969 logs.go:282] 1 containers: [27ee00545a23ca3d022b68468445151371316d97acbaf2235b93791b944d3e2d]
	I1213 19:21:01.908199  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:01.912369  602969 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:21:01.912453  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:21:01.952191  602969 cri.go:89] found id: "289abb226f700e5f3c20a349d1479217ce3aa4bc311be8aa6d6f12374b0cb68a"
	I1213 19:21:01.952214  602969 cri.go:89] found id: ""
	I1213 19:21:01.952223  602969 logs.go:282] 1 containers: [289abb226f700e5f3c20a349d1479217ce3aa4bc311be8aa6d6f12374b0cb68a]
	I1213 19:21:01.952279  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:01.955874  602969 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:21:01.955949  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:21:01.995630  602969 cri.go:89] found id: "d5719b1b478dec4dc55817883d4f9577dc475cde036d3e161334d371c1e81411"
	I1213 19:21:01.995655  602969 cri.go:89] found id: ""
	I1213 19:21:01.995663  602969 logs.go:282] 1 containers: [d5719b1b478dec4dc55817883d4f9577dc475cde036d3e161334d371c1e81411]
	I1213 19:21:01.995723  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:01.999503  602969 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:21:01.999589  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:21:02.046099  602969 cri.go:89] found id: "833e3ba74cac9f8b814ea06aeafe171150043188b329c9d621a03c75dbc4578f"
	I1213 19:21:02.046123  602969 cri.go:89] found id: ""
	I1213 19:21:02.046131  602969 logs.go:282] 1 containers: [833e3ba74cac9f8b814ea06aeafe171150043188b329c9d621a03c75dbc4578f]
	I1213 19:21:02.046193  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:02.050255  602969 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:21:02.050412  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:21:02.092267  602969 cri.go:89] found id: "1449f483df90f5a531d350913e0aa7cdf914d18ed8dca152d252402554d37102"
	I1213 19:21:02.092292  602969 cri.go:89] found id: ""
	I1213 19:21:02.092300  602969 logs.go:282] 1 containers: [1449f483df90f5a531d350913e0aa7cdf914d18ed8dca152d252402554d37102]
	I1213 19:21:02.092389  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:02.096421  602969 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:21:02.096586  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:21:02.137435  602969 cri.go:89] found id: "4283a1804a94cc88954082e4f508a1cbf5f868d2c0662a9d3f0e826e9b6c5f1a"
	I1213 19:21:02.137511  602969 cri.go:89] found id: ""
	I1213 19:21:02.137535  602969 logs.go:282] 1 containers: [4283a1804a94cc88954082e4f508a1cbf5f868d2c0662a9d3f0e826e9b6c5f1a]
	I1213 19:21:02.137622  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:02.141668  602969 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:21:02.141786  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:21:02.183547  602969 cri.go:89] found id: "da25e26a83aad96484fe1eecafba3a8b5e62f5486ff06573a87eb752e248b7f3"
	I1213 19:21:02.183576  602969 cri.go:89] found id: ""
	I1213 19:21:02.183585  602969 logs.go:282] 1 containers: [da25e26a83aad96484fe1eecafba3a8b5e62f5486ff06573a87eb752e248b7f3]
	I1213 19:21:02.183701  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:02.188103  602969 logs.go:123] Gathering logs for etcd [289abb226f700e5f3c20a349d1479217ce3aa4bc311be8aa6d6f12374b0cb68a] ...
	I1213 19:21:02.188132  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 289abb226f700e5f3c20a349d1479217ce3aa4bc311be8aa6d6f12374b0cb68a"
	I1213 19:21:02.241298  602969 logs.go:123] Gathering logs for kube-scheduler [833e3ba74cac9f8b814ea06aeafe171150043188b329c9d621a03c75dbc4578f] ...
	I1213 19:21:02.241332  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 833e3ba74cac9f8b814ea06aeafe171150043188b329c9d621a03c75dbc4578f"
	I1213 19:21:02.289740  602969 logs.go:123] Gathering logs for kubelet ...
	I1213 19:21:02.289774  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 19:21:02.345674  602969 logs.go:138] Found kubelet problem: Dec 13 19:19:23 addons-248098 kubelet[1527]: W1213 19:19:23.865745    1527 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-248098" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-248098' and this object
	W1213 19:21:02.345950  602969 logs.go:138] Found kubelet problem: Dec 13 19:19:23 addons-248098 kubelet[1527]: E1213 19:19:23.865827    1527 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-248098\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-248098' and this object" logger="UnhandledError"
	W1213 19:21:02.361436  602969 logs.go:138] Found kubelet problem: Dec 13 19:19:39 addons-248098 kubelet[1527]: W1213 19:19:39.477016    1527 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-248098" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-248098' and this object
	W1213 19:21:02.361671  602969 logs.go:138] Found kubelet problem: Dec 13 19:19:39 addons-248098 kubelet[1527]: E1213 19:19:39.477065    1527 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-248098\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-248098' and this object" logger="UnhandledError"
	I1213 19:21:02.401354  602969 logs.go:123] Gathering logs for dmesg ...
	I1213 19:21:02.401382  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:21:02.419350  602969 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:21:02.419387  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 19:21:02.572424  602969 logs.go:123] Gathering logs for kube-apiserver [27ee00545a23ca3d022b68468445151371316d97acbaf2235b93791b944d3e2d] ...
	I1213 19:21:02.572457  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27ee00545a23ca3d022b68468445151371316d97acbaf2235b93791b944d3e2d"
	I1213 19:21:02.641829  602969 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:21:02.641866  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:21:02.736850  602969 logs.go:123] Gathering logs for container status ...
	I1213 19:21:02.736888  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:21:02.788016  602969 logs.go:123] Gathering logs for coredns [d5719b1b478dec4dc55817883d4f9577dc475cde036d3e161334d371c1e81411] ...
	I1213 19:21:02.788052  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5719b1b478dec4dc55817883d4f9577dc475cde036d3e161334d371c1e81411"
	I1213 19:21:02.830967  602969 logs.go:123] Gathering logs for kube-proxy [1449f483df90f5a531d350913e0aa7cdf914d18ed8dca152d252402554d37102] ...
	I1213 19:21:02.830999  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1449f483df90f5a531d350913e0aa7cdf914d18ed8dca152d252402554d37102"
	I1213 19:21:02.869566  602969 logs.go:123] Gathering logs for kube-controller-manager [4283a1804a94cc88954082e4f508a1cbf5f868d2c0662a9d3f0e826e9b6c5f1a] ...
	I1213 19:21:02.869595  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4283a1804a94cc88954082e4f508a1cbf5f868d2c0662a9d3f0e826e9b6c5f1a"
	I1213 19:21:02.939984  602969 logs.go:123] Gathering logs for kindnet [da25e26a83aad96484fe1eecafba3a8b5e62f5486ff06573a87eb752e248b7f3] ...
	I1213 19:21:02.940026  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da25e26a83aad96484fe1eecafba3a8b5e62f5486ff06573a87eb752e248b7f3"
	I1213 19:21:02.992970  602969 out.go:358] Setting ErrFile to fd 2...
	I1213 19:21:02.993000  602969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1213 19:21:02.993059  602969 out.go:270] X Problems detected in kubelet:
	W1213 19:21:02.993200  602969 out.go:270]   Dec 13 19:19:23 addons-248098 kubelet[1527]: W1213 19:19:23.865745    1527 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-248098" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-248098' and this object
	W1213 19:21:02.993218  602969 out.go:270]   Dec 13 19:19:23 addons-248098 kubelet[1527]: E1213 19:19:23.865827    1527 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-248098\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-248098' and this object" logger="UnhandledError"
	W1213 19:21:02.993230  602969 out.go:270]   Dec 13 19:19:39 addons-248098 kubelet[1527]: W1213 19:19:39.477016    1527 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-248098" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-248098' and this object
	W1213 19:21:02.993247  602969 out.go:270]   Dec 13 19:19:39 addons-248098 kubelet[1527]: E1213 19:19:39.477065    1527 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-248098\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-248098' and this object" logger="UnhandledError"
	I1213 19:21:02.993261  602969 out.go:358] Setting ErrFile to fd 2...
	I1213 19:21:02.993268  602969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:21:12.994733  602969 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1213 19:21:13.008029  602969 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1213 19:21:13.011610  602969 api_server.go:141] control plane version: v1.31.2
	I1213 19:21:13.011651  602969 api_server.go:131] duration metric: took 11.144109173s to wait for apiserver health ...
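	Aside: the healthz probe above hits https://192.168.49.2:8443/healthz directly with the cluster's client certificates. The same check can be reproduced through kubectl, which handles the credentials (a sketch):

	    # Prints "ok" when the apiserver is healthy, matching the 200 response above.
	    kubectl --context addons-248098 get --raw /healthz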
	I1213 19:21:13.011662  602969 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 19:21:13.011689  602969 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:21:13.011757  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:21:13.055983  602969 cri.go:89] found id: "27ee00545a23ca3d022b68468445151371316d97acbaf2235b93791b944d3e2d"
	I1213 19:21:13.056004  602969 cri.go:89] found id: ""
	I1213 19:21:13.056012  602969 logs.go:282] 1 containers: [27ee00545a23ca3d022b68468445151371316d97acbaf2235b93791b944d3e2d]
	I1213 19:21:13.056076  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:13.060197  602969 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:21:13.060272  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:21:13.114407  602969 cri.go:89] found id: "289abb226f700e5f3c20a349d1479217ce3aa4bc311be8aa6d6f12374b0cb68a"
	I1213 19:21:13.114431  602969 cri.go:89] found id: ""
	I1213 19:21:13.114438  602969 logs.go:282] 1 containers: [289abb226f700e5f3c20a349d1479217ce3aa4bc311be8aa6d6f12374b0cb68a]
	I1213 19:21:13.114500  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:13.118390  602969 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:21:13.118525  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:21:13.162684  602969 cri.go:89] found id: "d5719b1b478dec4dc55817883d4f9577dc475cde036d3e161334d371c1e81411"
	I1213 19:21:13.162717  602969 cri.go:89] found id: ""
	I1213 19:21:13.162726  602969 logs.go:282] 1 containers: [d5719b1b478dec4dc55817883d4f9577dc475cde036d3e161334d371c1e81411]
	I1213 19:21:13.162789  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:13.166866  602969 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:21:13.166956  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:21:13.220934  602969 cri.go:89] found id: "833e3ba74cac9f8b814ea06aeafe171150043188b329c9d621a03c75dbc4578f"
	I1213 19:21:13.220980  602969 cri.go:89] found id: ""
	I1213 19:21:13.220989  602969 logs.go:282] 1 containers: [833e3ba74cac9f8b814ea06aeafe171150043188b329c9d621a03c75dbc4578f]
	I1213 19:21:13.221090  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:13.228707  602969 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:21:13.228829  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:21:13.289311  602969 cri.go:89] found id: "1449f483df90f5a531d350913e0aa7cdf914d18ed8dca152d252402554d37102"
	I1213 19:21:13.289342  602969 cri.go:89] found id: ""
	I1213 19:21:13.289352  602969 logs.go:282] 1 containers: [1449f483df90f5a531d350913e0aa7cdf914d18ed8dca152d252402554d37102]
	I1213 19:21:13.289424  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:13.294609  602969 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:21:13.294728  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:21:13.366508  602969 cri.go:89] found id: "4283a1804a94cc88954082e4f508a1cbf5f868d2c0662a9d3f0e826e9b6c5f1a"
	I1213 19:21:13.366557  602969 cri.go:89] found id: ""
	I1213 19:21:13.366567  602969 logs.go:282] 1 containers: [4283a1804a94cc88954082e4f508a1cbf5f868d2c0662a9d3f0e826e9b6c5f1a]
	I1213 19:21:13.366656  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:13.372576  602969 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:21:13.372670  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:21:13.416348  602969 cri.go:89] found id: "da25e26a83aad96484fe1eecafba3a8b5e62f5486ff06573a87eb752e248b7f3"
	I1213 19:21:13.416425  602969 cri.go:89] found id: ""
	I1213 19:21:13.416449  602969 logs.go:282] 1 containers: [da25e26a83aad96484fe1eecafba3a8b5e62f5486ff06573a87eb752e248b7f3]
	I1213 19:21:13.416529  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:13.420352  602969 logs.go:123] Gathering logs for kube-scheduler [833e3ba74cac9f8b814ea06aeafe171150043188b329c9d621a03c75dbc4578f] ...
	I1213 19:21:13.420391  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 833e3ba74cac9f8b814ea06aeafe171150043188b329c9d621a03c75dbc4578f"
	I1213 19:21:13.474383  602969 logs.go:123] Gathering logs for kindnet [da25e26a83aad96484fe1eecafba3a8b5e62f5486ff06573a87eb752e248b7f3] ...
	I1213 19:21:13.474428  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da25e26a83aad96484fe1eecafba3a8b5e62f5486ff06573a87eb752e248b7f3"
	I1213 19:21:13.522419  602969 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:21:13.522452  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:21:13.617192  602969 logs.go:123] Gathering logs for dmesg ...
	I1213 19:21:13.617230  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:21:13.634890  602969 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:21:13.634918  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 19:21:13.784911  602969 logs.go:123] Gathering logs for coredns [d5719b1b478dec4dc55817883d4f9577dc475cde036d3e161334d371c1e81411] ...
	I1213 19:21:13.784948  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5719b1b478dec4dc55817883d4f9577dc475cde036d3e161334d371c1e81411"
	I1213 19:21:13.828683  602969 logs.go:123] Gathering logs for kube-proxy [1449f483df90f5a531d350913e0aa7cdf914d18ed8dca152d252402554d37102] ...
	I1213 19:21:13.828720  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1449f483df90f5a531d350913e0aa7cdf914d18ed8dca152d252402554d37102"
	I1213 19:21:13.868810  602969 logs.go:123] Gathering logs for kube-controller-manager [4283a1804a94cc88954082e4f508a1cbf5f868d2c0662a9d3f0e826e9b6c5f1a] ...
	I1213 19:21:13.868851  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4283a1804a94cc88954082e4f508a1cbf5f868d2c0662a9d3f0e826e9b6c5f1a"
	I1213 19:21:13.966616  602969 logs.go:123] Gathering logs for container status ...
	I1213 19:21:13.966653  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:21:14.036058  602969 logs.go:123] Gathering logs for kubelet ...
	I1213 19:21:14.036098  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 19:21:14.100263  602969 logs.go:138] Found kubelet problem: Dec 13 19:19:23 addons-248098 kubelet[1527]: W1213 19:19:23.865745    1527 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-248098" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-248098' and this object
	W1213 19:21:14.100509  602969 logs.go:138] Found kubelet problem: Dec 13 19:19:23 addons-248098 kubelet[1527]: E1213 19:19:23.865827    1527 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-248098\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-248098' and this object" logger="UnhandledError"
	W1213 19:21:14.115979  602969 logs.go:138] Found kubelet problem: Dec 13 19:19:39 addons-248098 kubelet[1527]: W1213 19:19:39.477016    1527 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-248098" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-248098' and this object
	W1213 19:21:14.116213  602969 logs.go:138] Found kubelet problem: Dec 13 19:19:39 addons-248098 kubelet[1527]: E1213 19:19:39.477065    1527 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-248098\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-248098' and this object" logger="UnhandledError"
	I1213 19:21:14.156559  602969 logs.go:123] Gathering logs for kube-apiserver [27ee00545a23ca3d022b68468445151371316d97acbaf2235b93791b944d3e2d] ...
	I1213 19:21:14.156587  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27ee00545a23ca3d022b68468445151371316d97acbaf2235b93791b944d3e2d"
	I1213 19:21:14.211656  602969 logs.go:123] Gathering logs for etcd [289abb226f700e5f3c20a349d1479217ce3aa4bc311be8aa6d6f12374b0cb68a] ...
	I1213 19:21:14.211697  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 289abb226f700e5f3c20a349d1479217ce3aa4bc311be8aa6d6f12374b0cb68a"
	I1213 19:21:14.260185  602969 out.go:358] Setting ErrFile to fd 2...
	I1213 19:21:14.260216  602969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1213 19:21:14.260273  602969 out.go:270] X Problems detected in kubelet:
	W1213 19:21:14.260311  602969 out.go:270]   Dec 13 19:19:23 addons-248098 kubelet[1527]: W1213 19:19:23.865745    1527 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-248098" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-248098' and this object
	W1213 19:21:14.260318  602969 out.go:270]   Dec 13 19:19:23 addons-248098 kubelet[1527]: E1213 19:19:23.865827    1527 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-248098\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-248098' and this object" logger="UnhandledError"
	W1213 19:21:14.260325  602969 out.go:270]   Dec 13 19:19:39 addons-248098 kubelet[1527]: W1213 19:19:39.477016    1527 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-248098" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-248098' and this object
	W1213 19:21:14.260332  602969 out.go:270]   Dec 13 19:19:39 addons-248098 kubelet[1527]: E1213 19:19:39.477065    1527 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-248098\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-248098' and this object" logger="UnhandledError"
	I1213 19:21:14.260337  602969 out.go:358] Setting ErrFile to fd 2...
	I1213 19:21:14.260346  602969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:21:24.272192  602969 system_pods.go:59] 18 kube-system pods found
	I1213 19:21:24.272235  602969 system_pods.go:61] "coredns-7c65d6cfc9-bt6ls" [23b8e6b9-51eb-4a14-bee8-7eacdb154832] Running
	I1213 19:21:24.272242  602969 system_pods.go:61] "csi-hostpath-attacher-0" [98592c8c-f15c-40c5-831b-2239874143ea] Running
	I1213 19:21:24.272247  602969 system_pods.go:61] "csi-hostpath-resizer-0" [14cdb963-4eb9-4472-8a01-549e09a55047] Running
	I1213 19:21:24.272255  602969 system_pods.go:61] "csi-hostpathplugin-l2fk7" [30df306a-dc88-4eb0-aa19-d35529eda401] Running
	I1213 19:21:24.272260  602969 system_pods.go:61] "etcd-addons-248098" [014814e1-1087-4331-aeb4-7fd59c3165e5] Running
	I1213 19:21:24.272264  602969 system_pods.go:61] "kindnet-n9pvh" [7e6398f0-53e1-4774-bdd6-211a800d8291] Running
	I1213 19:21:24.272268  602969 system_pods.go:61] "kube-apiserver-addons-248098" [a3e569f6-6078-4dc0-a3b2-764a0180614c] Running
	I1213 19:21:24.272273  602969 system_pods.go:61] "kube-controller-manager-addons-248098" [b6473627-2b96-431a-9082-99576908ad11] Running
	I1213 19:21:24.272284  602969 system_pods.go:61] "kube-ingress-dns-minikube" [53321af4-b841-467d-af38-89b82188ff1d] Running
	I1213 19:21:24.272289  602969 system_pods.go:61] "kube-proxy-rcbrb" [fb396ab8-720d-41c3-9d2b-d1b2fb666b0b] Running
	I1213 19:21:24.272296  602969 system_pods.go:61] "kube-scheduler-addons-248098" [ac75ce0f-098a-4f6d-9e98-697f3b89e854] Running
	I1213 19:21:24.272300  602969 system_pods.go:61] "metrics-server-84c5f94fbc-g7jcr" [a41f7493-f390-4111-9ecf-6b9c91d88986] Running
	I1213 19:21:24.272305  602969 system_pods.go:61] "nvidia-device-plugin-daemonset-xsrsn" [bfc935e3-d013-494e-8380-5b4be1f7a0c9] Running
	I1213 19:21:24.272312  602969 system_pods.go:61] "registry-5cc95cd69-5n4c9" [7ec0f719-ff86-4cc0-9868-18a171b8d618] Running
	I1213 19:21:24.272316  602969 system_pods.go:61] "registry-proxy-nvc8d" [c14eabdb-94a1-4ed0-8a97-51210e96f13a] Running
	I1213 19:21:24.272321  602969 system_pods.go:61] "snapshot-controller-56fcc65765-ltsx9" [7191195a-2231-4fe5-9bf3-ba875b3ceeb5] Running
	I1213 19:21:24.272335  602969 system_pods.go:61] "snapshot-controller-56fcc65765-sqhl4" [a11c4e23-9e52-4164-b6f3-f29f74154fab] Running
	I1213 19:21:24.272339  602969 system_pods.go:61] "storage-provisioner" [1d273a3f-36bb-4847-ad88-3544cda8cde5] Running
	I1213 19:21:24.272344  602969 system_pods.go:74] duration metric: took 11.260676188s to wait for pod list to return data ...
	I1213 19:21:24.272356  602969 default_sa.go:34] waiting for default service account to be created ...
	I1213 19:21:24.275135  602969 default_sa.go:45] found service account: "default"
	I1213 19:21:24.275162  602969 default_sa.go:55] duration metric: took 2.799619ms for default service account to be created ...
	I1213 19:21:24.275172  602969 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 19:21:24.286490  602969 system_pods.go:86] 18 kube-system pods found
	I1213 19:21:24.286530  602969 system_pods.go:89] "coredns-7c65d6cfc9-bt6ls" [23b8e6b9-51eb-4a14-bee8-7eacdb154832] Running
	I1213 19:21:24.286539  602969 system_pods.go:89] "csi-hostpath-attacher-0" [98592c8c-f15c-40c5-831b-2239874143ea] Running
	I1213 19:21:24.286544  602969 system_pods.go:89] "csi-hostpath-resizer-0" [14cdb963-4eb9-4472-8a01-549e09a55047] Running
	I1213 19:21:24.286550  602969 system_pods.go:89] "csi-hostpathplugin-l2fk7" [30df306a-dc88-4eb0-aa19-d35529eda401] Running
	I1213 19:21:24.286555  602969 system_pods.go:89] "etcd-addons-248098" [014814e1-1087-4331-aeb4-7fd59c3165e5] Running
	I1213 19:21:24.286560  602969 system_pods.go:89] "kindnet-n9pvh" [7e6398f0-53e1-4774-bdd6-211a800d8291] Running
	I1213 19:21:24.286565  602969 system_pods.go:89] "kube-apiserver-addons-248098" [a3e569f6-6078-4dc0-a3b2-764a0180614c] Running
	I1213 19:21:24.286570  602969 system_pods.go:89] "kube-controller-manager-addons-248098" [b6473627-2b96-431a-9082-99576908ad11] Running
	I1213 19:21:24.286574  602969 system_pods.go:89] "kube-ingress-dns-minikube" [53321af4-b841-467d-af38-89b82188ff1d] Running
	I1213 19:21:24.286579  602969 system_pods.go:89] "kube-proxy-rcbrb" [fb396ab8-720d-41c3-9d2b-d1b2fb666b0b] Running
	I1213 19:21:24.286583  602969 system_pods.go:89] "kube-scheduler-addons-248098" [ac75ce0f-098a-4f6d-9e98-697f3b89e854] Running
	I1213 19:21:24.286588  602969 system_pods.go:89] "metrics-server-84c5f94fbc-g7jcr" [a41f7493-f390-4111-9ecf-6b9c91d88986] Running
	I1213 19:21:24.286591  602969 system_pods.go:89] "nvidia-device-plugin-daemonset-xsrsn" [bfc935e3-d013-494e-8380-5b4be1f7a0c9] Running
	I1213 19:21:24.286595  602969 system_pods.go:89] "registry-5cc95cd69-5n4c9" [7ec0f719-ff86-4cc0-9868-18a171b8d618] Running
	I1213 19:21:24.286599  602969 system_pods.go:89] "registry-proxy-nvc8d" [c14eabdb-94a1-4ed0-8a97-51210e96f13a] Running
	I1213 19:21:24.286603  602969 system_pods.go:89] "snapshot-controller-56fcc65765-ltsx9" [7191195a-2231-4fe5-9bf3-ba875b3ceeb5] Running
	I1213 19:21:24.286607  602969 system_pods.go:89] "snapshot-controller-56fcc65765-sqhl4" [a11c4e23-9e52-4164-b6f3-f29f74154fab] Running
	I1213 19:21:24.286611  602969 system_pods.go:89] "storage-provisioner" [1d273a3f-36bb-4847-ad88-3544cda8cde5] Running
	I1213 19:21:24.286618  602969 system_pods.go:126] duration metric: took 11.440315ms to wait for k8s-apps to be running ...
	I1213 19:21:24.286645  602969 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 19:21:24.286737  602969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:21:24.299683  602969 system_svc.go:56] duration metric: took 13.040573ms WaitForService to wait for kubelet
	I1213 19:21:24.299710  602969 kubeadm.go:582] duration metric: took 2m3.069273573s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 19:21:24.299729  602969 node_conditions.go:102] verifying NodePressure condition ...
	I1213 19:21:24.304220  602969 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1213 19:21:24.304264  602969 node_conditions.go:123] node cpu capacity is 2
	I1213 19:21:24.304277  602969 node_conditions.go:105] duration metric: took 4.542452ms to run NodePressure ...
	I1213 19:21:24.304291  602969 start.go:241] waiting for startup goroutines ...
	I1213 19:21:24.304299  602969 start.go:246] waiting for cluster config update ...
	I1213 19:21:24.304316  602969 start.go:255] writing updated cluster config ...
	I1213 19:21:24.304631  602969 ssh_runner.go:195] Run: rm -f paused
	I1213 19:21:24.730318  602969 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1213 19:21:24.735680  602969 out.go:177] * Done! kubectl is now configured to use "addons-248098" cluster and "default" namespace by default
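	
	The sections below are the post-mortem diagnostics minikube collects after a failure. As a sketch, assuming the addons-248098 profile still exists, the same bundle can be regenerated with:
	
	    # Dump the full post-mortem log bundle for the test profile to a file
	    out/minikube-linux-arm64 -p addons-248098 logs --file=postmortem.txt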
	
	
	==> CRI-O <==
	Dec 13 19:24:16 addons-248098 crio[989]: time="2024-12-13 19:24:16.954503584Z" level=info msg="Removed pod sandbox: 2db93e8ad44fd6457c70a23a15a6c08dd4000c0f040f289884c6e7bc897ccf63" id=701a29ca-b195-4229-b64f-b49e8feb985d name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 13 19:25:50 addons-248098 crio[989]: time="2024-12-13 19:25:50.149452955Z" level=info msg="Running pod sandbox: default/hello-world-app-55bf9c44b4-z9wlr/POD" id=0b80c7fb-507a-4fd4-999d-b473e2e18082 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 19:25:50 addons-248098 crio[989]: time="2024-12-13 19:25:50.149518015Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 13 19:25:50 addons-248098 crio[989]: time="2024-12-13 19:25:50.208541999Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-z9wlr Namespace:default ID:0fc8cd343fdb5dd786eec1281025c81b44fbc34607773c0b4fc784bfbc42df2e UID:14af58da-f64c-47d5-98c4-0b019b2ce7f2 NetNS:/var/run/netns/74b4da3b-468e-4828-a1b4-fef8416cb004 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 13 19:25:50 addons-248098 crio[989]: time="2024-12-13 19:25:50.208586447Z" level=info msg="Adding pod default_hello-world-app-55bf9c44b4-z9wlr to CNI network \"kindnet\" (type=ptp)"
	Dec 13 19:25:50 addons-248098 crio[989]: time="2024-12-13 19:25:50.230085173Z" level=info msg="Got pod network &{Name:hello-world-app-55bf9c44b4-z9wlr Namespace:default ID:0fc8cd343fdb5dd786eec1281025c81b44fbc34607773c0b4fc784bfbc42df2e UID:14af58da-f64c-47d5-98c4-0b019b2ce7f2 NetNS:/var/run/netns/74b4da3b-468e-4828-a1b4-fef8416cb004 Networks:[] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 13 19:25:50 addons-248098 crio[989]: time="2024-12-13 19:25:50.230358587Z" level=info msg="Checking pod default_hello-world-app-55bf9c44b4-z9wlr for CNI network kindnet (type=ptp)"
	Dec 13 19:25:50 addons-248098 crio[989]: time="2024-12-13 19:25:50.234102175Z" level=info msg="Ran pod sandbox 0fc8cd343fdb5dd786eec1281025c81b44fbc34607773c0b4fc784bfbc42df2e with infra container: default/hello-world-app-55bf9c44b4-z9wlr/POD" id=0b80c7fb-507a-4fd4-999d-b473e2e18082 name=/runtime.v1.RuntimeService/RunPodSandbox
	Dec 13 19:25:50 addons-248098 crio[989]: time="2024-12-13 19:25:50.235449934Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=789157a9-c175-4f7d-b34a-7c87dcb6b152 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:25:50 addons-248098 crio[989]: time="2024-12-13 19:25:50.235685104Z" level=info msg="Image docker.io/kicbase/echo-server:1.0 not found" id=789157a9-c175-4f7d-b34a-7c87dcb6b152 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:25:50 addons-248098 crio[989]: time="2024-12-13 19:25:50.237957190Z" level=info msg="Pulling image: docker.io/kicbase/echo-server:1.0" id=ffa1ff09-45d1-4bf9-bb17-c23e415e8251 name=/runtime.v1.ImageService/PullImage
	Dec 13 19:25:50 addons-248098 crio[989]: time="2024-12-13 19:25:50.240456299Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 13 19:25:50 addons-248098 crio[989]: time="2024-12-13 19:25:50.523770140Z" level=info msg="Trying to access \"docker.io/kicbase/echo-server:1.0\""
	Dec 13 19:25:51 addons-248098 crio[989]: time="2024-12-13 19:25:51.310529399Z" level=info msg="Pulled image: docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6" id=ffa1ff09-45d1-4bf9-bb17-c23e415e8251 name=/runtime.v1.ImageService/PullImage
	Dec 13 19:25:51 addons-248098 crio[989]: time="2024-12-13 19:25:51.311586963Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=80380c2c-8b1f-4fb6-9ebc-b17961ad4379 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:25:51 addons-248098 crio[989]: time="2024-12-13 19:25:51.312299308Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=80380c2c-8b1f-4fb6-9ebc-b17961ad4379 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:25:51 addons-248098 crio[989]: time="2024-12-13 19:25:51.313428964Z" level=info msg="Checking image status: docker.io/kicbase/echo-server:1.0" id=26d24bab-1ee3-414a-a310-38a0978faf00 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:25:51 addons-248098 crio[989]: time="2024-12-13 19:25:51.314137124Z" level=info msg="Image status: &ImageStatusResponse{Image:&Image{Id:ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17,RepoTags:[docker.io/kicbase/echo-server:1.0],RepoDigests:[docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6 docker.io/kicbase/echo-server@sha256:42a89d9b22e5307cb88494990d5d929c401339f508c0a7e98a4d8ac52623fc5b],Size_:4789170,Uid:nil,Username:,Spec:nil,},Info:map[string]string{},}" id=26d24bab-1ee3-414a-a310-38a0978faf00 name=/runtime.v1.ImageService/ImageStatus
	Dec 13 19:25:51 addons-248098 crio[989]: time="2024-12-13 19:25:51.315582895Z" level=info msg="Creating container: default/hello-world-app-55bf9c44b4-z9wlr/hello-world-app" id=dae04327-0b78-4e44-8900-0bce8551e926 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 19:25:51 addons-248098 crio[989]: time="2024-12-13 19:25:51.315678190Z" level=warning msg="Allowed annotations are specified for workload []"
	Dec 13 19:25:51 addons-248098 crio[989]: time="2024-12-13 19:25:51.349556749Z" level=warning msg="Failed to open /etc/passwd: open /var/lib/containers/storage/overlay/7b5415019a9546f4dc15ddba13245a5d40d0cf415a3aaa00c98e313f9b292b14/merged/etc/passwd: no such file or directory"
	Dec 13 19:25:51 addons-248098 crio[989]: time="2024-12-13 19:25:51.349762199Z" level=warning msg="Failed to open /etc/group: open /var/lib/containers/storage/overlay/7b5415019a9546f4dc15ddba13245a5d40d0cf415a3aaa00c98e313f9b292b14/merged/etc/group: no such file or directory"
	Dec 13 19:25:51 addons-248098 crio[989]: time="2024-12-13 19:25:51.400996448Z" level=info msg="Created container 9d9515c6509ec7bb23852d6aa7f02ef4d2f61cabd27c4f4fc716428b7f281145: default/hello-world-app-55bf9c44b4-z9wlr/hello-world-app" id=dae04327-0b78-4e44-8900-0bce8551e926 name=/runtime.v1.RuntimeService/CreateContainer
	Dec 13 19:25:51 addons-248098 crio[989]: time="2024-12-13 19:25:51.401533505Z" level=info msg="Starting container: 9d9515c6509ec7bb23852d6aa7f02ef4d2f61cabd27c4f4fc716428b7f281145" id=40c00b08-8282-4650-a059-51b3fdf97b02 name=/runtime.v1.RuntimeService/StartContainer
	Dec 13 19:25:51 addons-248098 crio[989]: time="2024-12-13 19:25:51.408634051Z" level=info msg="Started container" PID=9370 containerID=9d9515c6509ec7bb23852d6aa7f02ef4d2f61cabd27c4f4fc716428b7f281145 description=default/hello-world-app-55bf9c44b4-z9wlr/hello-world-app id=40c00b08-8282-4650-a059-51b3fdf97b02 name=/runtime.v1.RuntimeService/StartContainer sandboxID=0fc8cd343fdb5dd786eec1281025c81b44fbc34607773c0b4fc784bfbc42df2e
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED                  STATE               NAME                      ATTEMPT             POD ID              POD
	9d9515c6509ec       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                        Less than a second ago   Running             hello-world-app           0                   0fc8cd343fdb5       hello-world-app-55bf9c44b4-z9wlr
	b7d7a44eec17b       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                              2 minutes ago            Running             nginx                     0                   168334f3be3f6       nginx
	0ee8eaa9b3f42       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                          4 minutes ago            Running             busybox                   0                   0ae8365e4a516       busybox
	bda0caff014ee       registry.k8s.io/ingress-nginx/controller@sha256:787a5408fa511266888b2e765f9666bee67d9bf2518a6b7cfd4ab6cc01c22eee             5 minutes ago            Running             controller                0                   0ec565a7d84e5       ingress-nginx-controller-5f85ff4588-77ds6
	90583dbe72d4a       gcr.io/k8s-minikube/minikube-ingress-dns@sha256:4211a1de532376c881851542238121b26792225faa36a7b02dccad88fd05797c             5 minutes ago            Running             minikube-ingress-dns      0                   657f4440d1fda       kube-ingress-dns-minikube
	1503028b745d0       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98             5 minutes ago            Running             local-path-provisioner    0                   9063ee2cd8175       local-path-provisioner-86d989889c-rgd6q
	e10ba2c21305f       d54655ed3a8543a162b688a24bf969ee1a28d906b8ccb30188059247efdae234                                                             5 minutes ago            Exited              patch                     2                   13679a5079d48       ingress-nginx-admission-patch-7r99g
	999049ad75afc       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:0550b75a965592f1dde3fbeaa98f67a1e10c5a086bcd69a29054cc4edcb56771   6 minutes ago            Exited              create                    0                   0ad06363f4b95       ingress-nginx-admission-create-2fpd2
	eb0c779bf9b1d       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f        6 minutes ago            Running             metrics-server            0                   25e7603213900       metrics-server-84c5f94fbc-g7jcr
	d5719b1b478de       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                             6 minutes ago            Running             coredns                   0                   5c0b264fe641c       coredns-7c65d6cfc9-bt6ls
	0c0704d382a69       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                             6 minutes ago            Running             storage-provisioner       0                   d7807360953a9       storage-provisioner
	da25e26a83aad       docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e                           6 minutes ago            Running             kindnet-cni               0                   96f405480c5da       kindnet-n9pvh
	1449f483df90f       021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba                                                             6 minutes ago            Running             kube-proxy                0                   9de7aa20493ea       kube-proxy-rcbrb
	27ee00545a23c       f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270                                                             6 minutes ago            Running             kube-apiserver            0                   7082116ed71bc       kube-apiserver-addons-248098
	289abb226f700       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                             6 minutes ago            Running             etcd                      0                   7412a2a5bc972       etcd-addons-248098
	833e3ba74cac9       d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a                                                             6 minutes ago            Running             kube-scheduler            0                   249b5349b7b11       kube-scheduler-addons-248098
	4283a1804a94c       9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba                                                             6 minutes ago            Running             kube-controller-manager   0                   680c82ba028a7       kube-controller-manager-addons-248098
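	
	This table is CRI-O's container listing at collection time; the two Exited entries are the ingress-nginx admission Jobs (create/patch), which are expected to exit. Assuming the node is still up, the listing can be reproduced with:
	
	    # List all containers (running and exited) via the CRI socket
	    out/minikube-linux-arm64 -p addons-248098 ssh "sudo crictl ps -a"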
	
	
	==> coredns [d5719b1b478dec4dc55817883d4f9577dc475cde036d3e161334d371c1e81411] <==
	[INFO] 10.244.0.6:47673 - 45497 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 94 false 1232" NXDOMAIN qr,rd,ra 83 0.002309471s
	[INFO] 10.244.0.6:47673 - 31481 "AAAA IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 149 0.000141221s
	[INFO] 10.244.0.6:47673 - 25493 "A IN registry.kube-system.svc.cluster.local. udp 67 false 1232" NOERROR qr,aa,rd 110 0.000103812s
	[INFO] 10.244.0.6:57376 - 41212 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000111632s
	[INFO] 10.244.0.6:57376 - 40727 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.0000934s
	[INFO] 10.244.0.6:58629 - 7071 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000069327s
	[INFO] 10.244.0.6:58629 - 6858 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000047821s
	[INFO] 10.244.0.6:33103 - 15392 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000062811s
	[INFO] 10.244.0.6:33103 - 15212 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000053696s
	[INFO] 10.244.0.6:49297 - 22745 "A IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001473972s
	[INFO] 10.244.0.6:49297 - 22967 "AAAA IN registry.kube-system.svc.cluster.local.us-east-2.compute.internal. udp 83 false 512" NXDOMAIN qr,rd,ra 83 0.001791359s
	[INFO] 10.244.0.6:59233 - 22957 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000117097s
	[INFO] 10.244.0.6:59233 - 23113 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00022118s
	[INFO] 10.244.0.21:38003 - 53913 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000286453s
	[INFO] 10.244.0.21:33082 - 31154 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000283073s
	[INFO] 10.244.0.21:32993 - 56738 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00015041s
	[INFO] 10.244.0.21:52200 - 51436 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000643726s
	[INFO] 10.244.0.21:49264 - 4377 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000156573s
	[INFO] 10.244.0.21:37568 - 38618 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000124761s
	[INFO] 10.244.0.21:43774 - 39369 "AAAA IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.002822955s
	[INFO] 10.244.0.21:58530 - 50634 "A IN storage.googleapis.com.us-east-2.compute.internal. udp 78 false 1232" NXDOMAIN qr,rd,ra 67 0.003320298s
	[INFO] 10.244.0.21:58104 - 29741 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 648 0.008294313s
	[INFO] 10.244.0.21:47436 - 18000 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.008930908s
	[INFO] 10.244.0.24:44861 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000260664s
	[INFO] 10.244.0.24:47366 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000168904s
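	
	The NXDOMAIN/NOERROR ladder above is normal cluster DNS behavior: with ndots:5 in the pod's resolv.conf, a short name such as registry.kube-system is tried against every search domain before the fully qualified query succeeds. A quick check from the busybox pod in this report (assuming it is still running and its image ships nslookup):
	
	    # Show the search path that produces the NXDOMAIN ladder, then resolve the service directly
	    kubectl --context addons-248098 exec busybox -- cat /etc/resolv.conf
	    kubectl --context addons-248098 exec busybox -- nslookup registry.kube-system.svc.cluster.local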
	
	
	==> describe nodes <==
	Name:               addons-248098
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-248098
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956
	                    minikube.k8s.io/name=addons-248098
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_13T19_19_17_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-248098
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Dec 2024 19:19:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-248098
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Dec 2024 19:25:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Dec 2024 19:23:52 +0000   Fri, 13 Dec 2024 19:19:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Dec 2024 19:23:52 +0000   Fri, 13 Dec 2024 19:19:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Dec 2024 19:23:52 +0000   Fri, 13 Dec 2024 19:19:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Dec 2024 19:23:52 +0000   Fri, 13 Dec 2024 19:19:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-248098
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 0af0368374054463b8b1bd628ee8eb22
	  System UUID:                dce25a95-cc3d-451b-b59c-5c92da6108a0
	  Boot ID:                    8bc558cc-8777-4865-b401-e730957079d4
	  Kernel Version:             5.15.0-1072-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (15 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m26s
	  default                     hello-world-app-55bf9c44b4-z9wlr             0 (0%)        0 (0%)      0 (0%)           0 (0%)         2s
	  default                     nginx                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  ingress-nginx               ingress-nginx-controller-5f85ff4588-77ds6    100m (5%)     0 (0%)      90Mi (1%)        0 (0%)         6m24s
	  kube-system                 coredns-7c65d6cfc9-bt6ls                     100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     6m28s
	  kube-system                 etcd-addons-248098                           100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         6m35s
	  kube-system                 kindnet-n9pvh                                100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      6m29s
	  kube-system                 kube-apiserver-addons-248098                 250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-controller-manager-addons-248098        200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 kube-ingress-dns-minikube                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  kube-system                 kube-proxy-rcbrb                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m28s
	  kube-system                 kube-scheduler-addons-248098                 100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m35s
	  kube-system                 metrics-server-84c5f94fbc-g7jcr              100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         6m25s
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	  local-path-storage          local-path-provisioner-86d989889c-rgd6q      0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                1050m (52%)  100m (5%)
	  memory             510Mi (6%)   220Mi (2%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-1Gi      0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	  hugepages-32Mi     0 (0%)       0 (0%)
	  hugepages-64Ki     0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 6m23s                  kube-proxy       
	  Normal   Starting                 6m35s                  kubelet          Starting kubelet.
	  Warning  CgroupV1                 6m35s                  kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  6m35s (x2 over 6m35s)  kubelet          Node addons-248098 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    6m35s (x2 over 6m35s)  kubelet          Node addons-248098 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     6m35s (x2 over 6m35s)  kubelet          Node addons-248098 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           6m31s                  node-controller  Node addons-248098 event: Registered Node addons-248098 in Controller
	  Normal   NodeReady                6m12s                  kubelet          Node addons-248098 status is now: NodeReady
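	
	Worth noting for the failed tests: CPU requests already sit at 1050m of 2000m (52%) on this 2-CPU node, and hello-world-app is 2s old at collection time. The same node view can be taken live with:
	
	    # Reproduce the node summary above
	    kubectl --context addons-248098 describe node addons-248098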
	
	
	==> dmesg <==
	
	
	==> etcd [289abb226f700e5f3c20a349d1479217ce3aa4bc311be8aa6d6f12374b0cb68a] <==
	{"level":"info","ts":"2024-12-13T19:19:10.590717Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-13T19:19:10.591625Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-12-13T19:19:21.946468Z","caller":"traceutil/trace.go:171","msg":"trace[936921410] transaction","detail":"{read_only:false; response_revision:316; number_of_response:1; }","duration":"127.534977ms","start":"2024-12-13T19:19:21.818902Z","end":"2024-12-13T19:19:21.946437Z","steps":["trace[936921410] 'process raft request'  (duration: 43.075222ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:19:22.355014Z","caller":"traceutil/trace.go:171","msg":"trace[1311423016] transaction","detail":"{read_only:false; response_revision:318; number_of_response:1; }","duration":"138.771034ms","start":"2024-12-13T19:19:22.216225Z","end":"2024-12-13T19:19:22.354996Z","steps":["trace[1311423016] 'process raft request'  (duration: 138.610383ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:19:22.401520Z","caller":"traceutil/trace.go:171","msg":"trace[1286345005] transaction","detail":"{read_only:false; response_revision:319; number_of_response:1; }","duration":"185.164389ms","start":"2024-12-13T19:19:22.216333Z","end":"2024-12-13T19:19:22.401497Z","steps":["trace[1286345005] 'process raft request'  (duration: 138.615208ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:19:22.417210Z","caller":"traceutil/trace.go:171","msg":"trace[822680764] linearizableReadLoop","detail":"{readStateIndex:326; appliedIndex:325; }","duration":"200.919964ms","start":"2024-12-13T19:19:22.216276Z","end":"2024-12-13T19:19:22.417196Z","steps":["trace[822680764] 'read index received'  (duration: 103.548881ms)","trace[822680764] 'applied index is now lower than readState.Index'  (duration: 97.370534ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-13T19:19:22.417327Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.029357ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2024-12-13T19:19:22.502403Z","caller":"traceutil/trace.go:171","msg":"trace[322309030] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:320; }","duration":"286.109173ms","start":"2024-12-13T19:19:22.216272Z","end":"2024-12-13T19:19:22.502382Z","steps":["trace[322309030] 'agreement among raft nodes before linearized reading'  (duration: 200.970115ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:19:22.417422Z","caller":"traceutil/trace.go:171","msg":"trace[1489321000] transaction","detail":"{read_only:false; response_revision:320; number_of_response:1; }","duration":"162.451632ms","start":"2024-12-13T19:19:22.254964Z","end":"2024-12-13T19:19:22.417415Z","steps":["trace[1489321000] 'process raft request'  (duration: 162.137149ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:19:22.571107Z","caller":"traceutil/trace.go:171","msg":"trace[2079234850] transaction","detail":"{read_only:false; response_revision:321; number_of_response:1; }","duration":"250.729686ms","start":"2024-12-13T19:19:22.320359Z","end":"2024-12-13T19:19:22.571089Z","steps":["trace[2079234850] 'process raft request'  (duration: 209.514181ms)","trace[2079234850] 'compare'  (duration: 37.338718ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-13T19:19:22.579490Z","caller":"traceutil/trace.go:171","msg":"trace[1110984546] transaction","detail":"{read_only:false; response_revision:322; number_of_response:1; }","duration":"258.994085ms","start":"2024-12-13T19:19:22.320480Z","end":"2024-12-13T19:19:22.579474Z","steps":["trace[1110984546] 'process raft request'  (duration: 250.342153ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-13T19:19:22.628459Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"308.003647ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2024-12-13T19:19:22.649678Z","caller":"traceutil/trace.go:171","msg":"trace[627535555] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:322; }","duration":"329.22598ms","start":"2024-12-13T19:19:22.320426Z","end":"2024-12-13T19:19:22.649652Z","steps":["trace[627535555] 'agreement among raft nodes before linearized reading'  (duration: 279.7766ms)","trace[627535555] 'range keys from bolt db'  (duration: 25.603758ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-13T19:19:22.649954Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-13T19:19:22.320406Z","time spent":"329.514526ms","remote":"127.0.0.1:37482","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":636,"request content":"key:\"/registry/configmaps/kube-system/coredns\" "}
	{"level":"warn","ts":"2024-12-13T19:19:22.650627Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"330.164536ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-13T19:19:22.656078Z","caller":"traceutil/trace.go:171","msg":"trace[92943196] range","detail":"{range_begin:/registry/namespaces; range_end:; response_count:0; response_revision:322; }","duration":"335.61055ms","start":"2024-12-13T19:19:22.320447Z","end":"2024-12-13T19:19:22.656058Z","steps":["trace[92943196] 'agreement among raft nodes before linearized reading'  (duration: 330.139543ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-13T19:19:22.656260Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-13T19:19:22.320436Z","time spent":"335.796201ms","remote":"127.0.0.1:37498","response type":"/etcdserverpb.KV/Range","request count":0,"request size":24,"response count":0,"response size":29,"request content":"key:\"/registry/namespaces\" limit:1 "}
	{"level":"info","ts":"2024-12-13T19:19:22.715634Z","caller":"traceutil/trace.go:171","msg":"trace[529049175] transaction","detail":"{read_only:false; response_revision:323; number_of_response:1; }","duration":"154.336798ms","start":"2024-12-13T19:19:22.561281Z","end":"2024-12-13T19:19:22.715618Z","steps":["trace[529049175] 'process raft request'  (duration: 154.083543ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:20:55.571373Z","caller":"traceutil/trace.go:171","msg":"trace[806318250] transaction","detail":"{read_only:false; response_revision:1226; number_of_response:1; }","duration":"106.250175ms","start":"2024-12-13T19:20:55.465107Z","end":"2024-12-13T19:20:55.571357Z","steps":["trace[806318250] 'process raft request'  (duration: 106.12251ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:20:55.571507Z","caller":"traceutil/trace.go:171","msg":"trace[1413756354] transaction","detail":"{read_only:false; response_revision:1227; number_of_response:1; }","duration":"106.292761ms","start":"2024-12-13T19:20:55.465208Z","end":"2024-12-13T19:20:55.571501Z","steps":["trace[1413756354] 'process raft request'  (duration: 106.0575ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:20:55.571631Z","caller":"traceutil/trace.go:171","msg":"trace[984580659] transaction","detail":"{read_only:false; response_revision:1228; number_of_response:1; }","duration":"105.507881ms","start":"2024-12-13T19:20:55.466116Z","end":"2024-12-13T19:20:55.571624Z","steps":["trace[984580659] 'process raft request'  (duration: 105.172968ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:20:55.571726Z","caller":"traceutil/trace.go:171","msg":"trace[1743161593] linearizableReadLoop","detail":"{readStateIndex:1262; appliedIndex:1257; }","duration":"102.474618ms","start":"2024-12-13T19:20:55.469245Z","end":"2024-12-13T19:20:55.571719Z","steps":["trace[1743161593] 'read index received'  (duration: 15.995395ms)","trace[1743161593] 'applied index is now lower than readState.Index'  (duration: 86.478632ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-13T19:20:55.572400Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.138971ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/csi-hostpathplugin-l2fk7\" ","response":"range_response_count:1 size:12993"}
	{"level":"info","ts":"2024-12-13T19:20:55.572439Z","caller":"traceutil/trace.go:171","msg":"trace[1715638475] range","detail":"{range_begin:/registry/pods/kube-system/csi-hostpathplugin-l2fk7; range_end:; response_count:1; response_revision:1229; }","duration":"103.18936ms","start":"2024-12-13T19:20:55.469241Z","end":"2024-12-13T19:20:55.572430Z","steps":["trace[1715638475] 'agreement among raft nodes before linearized reading'  (duration: 102.574073ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:20:55.571644Z","caller":"traceutil/trace.go:171","msg":"trace[1022539298] transaction","detail":"{read_only:false; response_revision:1225; number_of_response:1; }","duration":"106.60575ms","start":"2024-12-13T19:20:55.465021Z","end":"2024-12-13T19:20:55.571627Z","steps":["trace[1022539298] 'process raft request'  (duration: 52.365558ms)","trace[1022539298] 'compare'  (duration: 53.748928ms)"],"step_count":2}
	
	
	==> kernel <==
	 19:25:52 up  3:07,  0 users,  load average: 0.27, 1.47, 2.44
	Linux addons-248098 5.15.0-1072-aws #78~20.04.1-Ubuntu SMP Wed Oct 9 15:29:54 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
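	
	The load average (0.27, 1.47, 2.44) suggests the node was heavily loaded earlier in the run and had mostly drained by collection time. Reproducible with:
	
	    # Kernel, uptime and load as seen from the node
	    out/minikube-linux-arm64 -p addons-248098 ssh "uptime && uname -a"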
	
	
	==> kindnet [da25e26a83aad96484fe1eecafba3a8b5e62f5486ff06573a87eb752e248b7f3] <==
	I1213 19:23:49.335955       1 main.go:301] handling current node
	I1213 19:23:59.334937       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:23:59.334973       1 main.go:301] handling current node
	I1213 19:24:09.335192       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:24:09.335337       1 main.go:301] handling current node
	I1213 19:24:19.334956       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:24:19.334996       1 main.go:301] handling current node
	I1213 19:24:29.335200       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:24:29.335314       1 main.go:301] handling current node
	I1213 19:24:39.342442       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:24:39.342480       1 main.go:301] handling current node
	I1213 19:24:49.343677       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:24:49.343722       1 main.go:301] handling current node
	I1213 19:24:59.335245       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:24:59.335276       1 main.go:301] handling current node
	I1213 19:25:09.339935       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:25:09.339972       1 main.go:301] handling current node
	I1213 19:25:19.343530       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:25:19.343580       1 main.go:301] handling current node
	I1213 19:25:29.335421       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:25:29.335459       1 main.go:301] handling current node
	I1213 19:25:39.340798       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:25:39.340837       1 main.go:301] handling current node
	I1213 19:25:49.335813       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:25:49.335848       1 main.go:301] handling current node
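	
	kindnet logs one reconcile pass roughly every 10s; on this single-node cluster it only ever handles the current node, so the stream above is healthy. To tail the same stream (assuming the app=kindnet label used by minikube's default manifest):
	
	    # Follow the CNI daemon's reconcile loop
	    kubectl --context addons-248098 -n kube-system logs -l app=kindnet --tail=20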
	
	
	==> kube-apiserver [27ee00545a23ca3d022b68468445151371316d97acbaf2235b93791b944d3e2d] <==
	 > logger="UnhandledError"
	E1213 19:20:49.877444       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.72.3:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.72.3:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.72.3:443: connect: connection refused" logger="UnhandledError"
	I1213 19:20:50.167714       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1213 19:21:34.791748       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:36812: use of closed network connection
	E1213 19:21:35.211372       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:36854: use of closed network connection
	I1213 19:21:44.609678       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.201.169"}
	I1213 19:22:47.019728       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1213 19:23:10.541433       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:23:10.547768       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 19:23:10.579260       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:23:10.579633       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 19:23:10.594662       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:23:10.594705       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 19:23:10.604171       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:23:10.604219       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 19:23:10.822739       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:23:10.822775       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1213 19:23:11.599147       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1213 19:23:11.823336       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1213 19:23:11.835647       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1213 19:23:24.385887       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1213 19:23:25.508597       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1213 19:23:29.989652       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1213 19:23:30.374495       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.136.243"}
	I1213 19:25:50.130689       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.68.160"}
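	
	The "connection refused" dials against 10.97.72.3:443 are the apiserver's availability probe for the aggregated v1beta1.metrics.k8s.io APIService failing while metrics-server was unreachable, which is likely the same condition the TestAddons/parallel/MetricsServer test tripped over. Two hedged checks:
	
	    # Is the aggregated metrics API marked Available, and does the Service have endpoints?
	    kubectl --context addons-248098 get apiservice v1beta1.metrics.k8s.io
	    kubectl --context addons-248098 -n kube-system get endpoints metrics-server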
	
	
	==> kube-controller-manager [4283a1804a94cc88954082e4f508a1cbf5f868d2c0662a9d3f0e826e9b6c5f1a] <==
	W1213 19:24:04.938570       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:24:04.938615       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:24:19.053058       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:24:19.053103       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:24:26.780082       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:24:26.780128       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:24:30.599554       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:24:30.599676       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:24:38.856914       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:24:38.856957       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:25:03.136050       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:25:03.136094       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:25:15.756367       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:25:15.756414       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:25:21.780122       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:25:21.780253       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:25:28.782508       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:25:28.782549       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:25:36.223807       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:25:36.223851       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1213 19:25:49.876915       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="63.23349ms"
	I1213 19:25:49.888298       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="10.971363ms"
	I1213 19:25:49.888753       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="64.222µs"
	I1213 19:25:51.906869       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="12.483711ms"
	I1213 19:25:51.906943       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="34.914µs"
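	
	The recurring PartialObjectMetadata watch errors begin right after the volumesnapshot and gadget CRDs were torn down at 19:23 (see the apiserver section above): the controller-manager's metadata informers keep retrying list/watch for resource types that no longer exist until the next discovery resync, so the errors are noisy but expected here. A quick confirmation that the types are gone:
	
	    # The CRDs the informers are still watching for should no longer exist
	    kubectl --context addons-248098 get crd | grep -E 'snapshot|gadget' || echo "none"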
	
	
	==> kube-proxy [1449f483df90f5a531d350913e0aa7cdf914d18ed8dca152d252402554d37102] <==
	I1213 19:19:26.856575       1 server_linux.go:66] "Using iptables proxy"
	I1213 19:19:27.434663       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1213 19:19:27.434728       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 19:19:28.459633       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 19:19:28.459776       1 server_linux.go:169] "Using iptables Proxier"
	I1213 19:19:28.462676       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 19:19:28.463711       1 server.go:483] "Version info" version="v1.31.2"
	I1213 19:19:28.463786       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:19:28.553230       1 config.go:199] "Starting service config controller"
	I1213 19:19:28.553341       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1213 19:19:28.553931       1 config.go:105] "Starting endpoint slice config controller"
	I1213 19:19:28.575024       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1213 19:19:28.554133       1 config.go:328] "Starting node config controller"
	I1213 19:19:28.575148       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1213 19:19:28.710663       1 shared_informer.go:320] Caches are synced for service config
	I1213 19:19:28.710979       1 shared_informer.go:320] Caches are synced for node config
	I1213 19:19:28.711013       1 shared_informer.go:320] Caches are synced for endpoint slice config
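	
	Note kube-proxy setting route_localnet=1: as its own message says, this is what permits NodePort connections on localhost, which matters whenever services are exercised via 127.0.0.1 on the node. To verify the sysctl is still in effect:
	
	    # route_localnet must be 1 for localhost NodePort traffic
	    out/minikube-linux-arm64 -p addons-248098 ssh "sysctl net.ipv4.conf.all.route_localnet"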
	
	
	==> kube-scheduler [833e3ba74cac9f8b814ea06aeafe171150043188b329c9d621a03c75dbc4578f] <==
	W1213 19:19:13.992373       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1213 19:19:13.994022       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:19:13.992722       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1213 19:19:13.994124       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:19:14.810363       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1213 19:19:14.811859       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:19:14.830724       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1213 19:19:14.830897       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:19:14.876077       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1213 19:19:14.876126       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:19:14.882112       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1213 19:19:14.882231       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 19:19:14.988961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1213 19:19:14.989138       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1213 19:19:14.991077       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1213 19:19:14.991192       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1213 19:19:15.093515       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1213 19:19:15.093566       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 19:19:15.127159       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1213 19:19:15.127305       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:19:15.207229       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1213 19:19:15.207275       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 19:19:15.258698       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1213 19:19:15.258955       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1213 19:19:17.658668       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 13 19:24:06 addons-248098 kubelet[1527]: E1213 19:24:06.718238    1527 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117846717930318,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:24:16 addons-248098 kubelet[1527]: E1213 19:24:16.585117    1527 container_manager_linux.go:513] "Failed to find cgroups of kubelet" err="cpu and memory cgroup hierarchy not unified.  cpu: /docker/71118ff07ec6fa79104cf400f95c50c9ae227a1aad64456bb5c81d1d75958776, memory: /docker/71118ff07ec6fa79104cf400f95c50c9ae227a1aad64456bb5c81d1d75958776/system.slice/kubelet.service"
	Dec 13 19:24:16 addons-248098 kubelet[1527]: E1213 19:24:16.720992    1527 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117856720670706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:24:16 addons-248098 kubelet[1527]: E1213 19:24:16.721071    1527 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117856720670706,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:24:26 addons-248098 kubelet[1527]: E1213 19:24:26.726872    1527 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117866725441149,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:24:26 addons-248098 kubelet[1527]: E1213 19:24:26.726911    1527 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117866725441149,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:24:36 addons-248098 kubelet[1527]: E1213 19:24:36.729337    1527 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117876729082662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:24:36 addons-248098 kubelet[1527]: E1213 19:24:36.729377    1527 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117876729082662,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:24:46 addons-248098 kubelet[1527]: E1213 19:24:46.731674    1527 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117886731412820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:24:46 addons-248098 kubelet[1527]: E1213 19:24:46.731710    1527 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117886731412820,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:24:56 addons-248098 kubelet[1527]: E1213 19:24:56.734459    1527 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117896734174638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:24:56 addons-248098 kubelet[1527]: E1213 19:24:56.734504    1527 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117896734174638,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:25:06 addons-248098 kubelet[1527]: E1213 19:25:06.737519    1527 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117906737255996,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:25:06 addons-248098 kubelet[1527]: E1213 19:25:06.737557    1527 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117906737255996,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:25:14 addons-248098 kubelet[1527]: I1213 19:25:14.495042    1527 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 19:25:16 addons-248098 kubelet[1527]: E1213 19:25:16.740664    1527 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117916740427183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:25:16 addons-248098 kubelet[1527]: E1213 19:25:16.740699    1527 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117916740427183,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:25:26 addons-248098 kubelet[1527]: E1213 19:25:26.744014    1527 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117926743736063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:25:26 addons-248098 kubelet[1527]: E1213 19:25:26.744061    1527 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117926743736063,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:25:36 addons-248098 kubelet[1527]: E1213 19:25:36.746342    1527 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117936746068138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:25:36 addons-248098 kubelet[1527]: E1213 19:25:36.746383    1527 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117936746068138,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:25:46 addons-248098 kubelet[1527]: E1213 19:25:46.749342    1527 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117946749077931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:25:46 addons-248098 kubelet[1527]: E1213 19:25:46.749383    1527 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117946749077931,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:606284,},InodesUsed:&UInt64Value{Value:231,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:25:49 addons-248098 kubelet[1527]: I1213 19:25:49.847654    1527 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="default/nginx" podStartSLOduration=137.206649815 podStartE2EDuration="2m19.847634724s" podCreationTimestamp="2024-12-13 19:23:30 +0000 UTC" firstStartedPulling="2024-12-13 19:23:30.641187923 +0000 UTC m=+254.266524798" lastFinishedPulling="2024-12-13 19:23:33.282172832 +0000 UTC m=+256.907509707" observedRunningTime="2024-12-13 19:23:33.61062549 +0000 UTC m=+257.235962357" watchObservedRunningTime="2024-12-13 19:25:49.847634724 +0000 UTC m=+393.472971590"
	Dec 13 19:25:50 addons-248098 kubelet[1527]: I1213 19:25:49.994736    1527 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xh8gv\" (UniqueName: \"kubernetes.io/projected/14af58da-f64c-47d5-98c4-0b019b2ce7f2-kube-api-access-xh8gv\") pod \"hello-world-app-55bf9c44b4-z9wlr\" (UID: \"14af58da-f64c-47d5-98c4-0b019b2ce7f2\") " pod="default/hello-world-app-55bf9c44b4-z9wlr"
	
	
	==> storage-provisioner [0c0704d382a69b93cc22a51e1e8cf786c5e6bb3b37718a2ca963a7aa91566d92] <==
	I1213 19:19:40.443989       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 19:19:40.470077       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 19:19:40.470130       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1213 19:19:40.503801       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 19:19:40.507064       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-248098_ef556317-08dd-4573-8f53-d898928781c1!
	I1213 19:19:40.511401       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cbcfe82e-6948-4068-b720-61c573d1f4fc", APIVersion:"v1", ResourceVersion:"893", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-248098_ef556317-08dd-4573-8f53-d898928781c1 became leader
	I1213 19:19:40.607521       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-248098_ef556317-08dd-4573-8f53-d898928781c1!
	E1213 19:23:09.647789       1 controller.go:1050] claim "1495a858-fb44-41da-96f5-75a367db6d66" in work queue no longer exists
	

-- /stdout --
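Editor's note: two distinct error families appear in the logs above. The kube-scheduler list/watch RBAC denials ("system:kube-scheduler cannot list ...") are a transient startup race and stop once the client-ca caches sync at 19:19:17. The recurring kubelet eviction_manager failures are separate and persist for the whole run: every ImageFsInfoResponse from cri-o carries an empty ContainerFilesystems list (visible in the dumps above), which the kubelet treats as "missing image stats", so it cannot compute HasDedicatedImageFs. A minimal diagnostic sketch, assuming the profile from this run is still up (crictl ships inside the minikube node image):

	# Ask cri-o directly for its image filesystem stats over the CRI socket;
	# this prints the same ImageFsInfoResponse the kubelet received.
	minikube -p addons-248098 ssh -- sudo crictl imagefsinfo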
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-248098 -n addons-248098
helpers_test.go:261: (dbg) Run:  kubectl --context addons-248098 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: ingress-nginx-admission-create-2fpd2 ingress-nginx-admission-patch-7r99g
helpers_test.go:274: ======> post-mortem[TestAddons/parallel/Ingress]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context addons-248098 describe pod ingress-nginx-admission-create-2fpd2 ingress-nginx-admission-patch-7r99g
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context addons-248098 describe pod ingress-nginx-admission-create-2fpd2 ingress-nginx-admission-patch-7r99g: exit status 1 (83.770496ms)

** stderr ** 
	Error from server (NotFound): pods "ingress-nginx-admission-create-2fpd2" not found
	Error from server (NotFound): pods "ingress-nginx-admission-patch-7r99g" not found

** /stderr **
helpers_test.go:279: kubectl --context addons-248098 describe pod ingress-nginx-admission-create-2fpd2 ingress-nginx-admission-patch-7r99g: exit status 1
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-248098 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-248098 addons disable ingress-dns --alsologtostderr -v=1: (1.065423617s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-248098 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-248098 addons disable ingress --alsologtostderr -v=1: (8.259224055s)
--- FAIL: TestAddons/parallel/Ingress (152.84s)
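Editor's note: the step that fails here is the in-node probe at addons_test.go:262. Exit status 28 is curl's CURLE_OPERATION_TIMEDOUT, i.e. the request through the ingress on 127.0.0.1:80 never completed inside the node before the test gave up (2m10s), even though the nginx pod itself went Running within 9s. A hedged reproduction sketch, assuming the addons-248098 profile still exists; the timeout and verbosity flags are ordinary curl/kubectl options, not part of the test suite:

	# Re-run the probe the test performs, with a short explicit timeout and verbose output
	minikube -p addons-248098 ssh "curl -v --max-time 10 http://127.0.0.1/ -H 'Host: nginx.example.com'"
	# Confirm the ingress-nginx controller is Running and see how it is exposed on the node
	kubectl --context addons-248098 -n ingress-nginx get pods,svc -o wide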

x
+
TestAddons/parallel/MetricsServer (305.61s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 6.708133ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-84c5f94fbc-g7jcr" [a41f7493-f390-4111-9ecf-6b9c91d88986] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.00394119s
addons_test.go:402: (dbg) Run:  kubectl --context addons-248098 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-248098 top pods -n kube-system: exit status 1 (96.130712ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-bt6ls, age: 3m2.4260131s

** /stderr **
I1213 19:22:25.429034  602199 retry.go:31] will retry after 2.091850592s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-248098 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-248098 top pods -n kube-system: exit status 1 (94.423982ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-bt6ls, age: 3m4.613787963s

** /stderr **
I1213 19:22:27.616556  602199 retry.go:31] will retry after 5.734644023s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-248098 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-248098 top pods -n kube-system: exit status 1 (92.541609ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-bt6ls, age: 3m10.440842314s

** /stderr **
I1213 19:22:33.444103  602199 retry.go:31] will retry after 6.797130864s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-248098 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-248098 top pods -n kube-system: exit status 1 (138.646321ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-bt6ls, age: 3m17.37722337s

** /stderr **
I1213 19:22:40.380215  602199 retry.go:31] will retry after 8.458294425s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-248098 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-248098 top pods -n kube-system: exit status 1 (198.659947ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-bt6ls, age: 3m26.035035747s

** /stderr **
I1213 19:22:49.038420  602199 retry.go:31] will retry after 17.335173696s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-248098 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-248098 top pods -n kube-system: exit status 1 (91.176736ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-bt6ls, age: 3m43.462459958s

** /stderr **
I1213 19:23:06.465629  602199 retry.go:31] will retry after 32.612004274s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-248098 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-248098 top pods -n kube-system: exit status 1 (83.463064ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-bt6ls, age: 4m16.157998775s

** /stderr **
I1213 19:23:39.162043  602199 retry.go:31] will retry after 47.456508054s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-248098 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-248098 top pods -n kube-system: exit status 1 (85.904664ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-bt6ls, age: 5m3.701095213s

** /stderr **
I1213 19:24:26.705449  602199 retry.go:31] will retry after 1m14.476867415s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-248098 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-248098 top pods -n kube-system: exit status 1 (84.255576ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-bt6ls, age: 6m18.26360013s

** /stderr **
I1213 19:25:41.266902  602199 retry.go:31] will retry after 33.59307492s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-248098 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-248098 top pods -n kube-system: exit status 1 (98.452214ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-bt6ls, age: 6m51.961318934s

** /stderr **
I1213 19:26:14.965152  602199 retry.go:31] will retry after 1m6.739119999s: exit status 1
addons_test.go:402: (dbg) Run:  kubectl --context addons-248098 top pods -n kube-system
addons_test.go:402: (dbg) Non-zero exit: kubectl --context addons-248098 top pods -n kube-system: exit status 1 (77.523025ms)

** stderr ** 
	error: Metrics not available for pod kube-system/coredns-7c65d6cfc9-bt6ls, age: 7m58.784619012s

** /stderr **
addons_test.go:416: failed checking metric server: exit status 1
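Editor's note: every retry of "kubectl top pods" across roughly five minutes failed with "Metrics not available", so the metrics-server pod was Running but the aggregated Metrics API never served data. A minimal triage sketch, assuming the same kubectl context; the APIService name is the stock metrics-server registration, not taken from this log:

	# Is the aggregated API registered and marked Available?
	kubectl --context addons-248098 get apiservice v1beta1.metrics.k8s.io
	# Does the API answer at all when queried directly?
	kubectl --context addons-248098 get --raw /apis/metrics.k8s.io/v1beta1/nodes
	# What does metrics-server itself report?
	kubectl --context addons-248098 -n kube-system logs -l k8s-app=metrics-server --tail=50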
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:230: ======>  post-mortem[TestAddons/parallel/MetricsServer]: docker inspect <======
helpers_test.go:231: (dbg) Run:  docker inspect addons-248098
helpers_test.go:235: (dbg) docker inspect addons-248098:

-- stdout --
	[
	    {
	        "Id": "71118ff07ec6fa79104cf400f95c50c9ae227a1aad64456bb5c81d1d75958776",
	        "Created": "2024-12-13T19:18:52.315159725Z",
	        "Path": "/usr/local/bin/entrypoint",
	        "Args": [
	            "/sbin/init"
	        ],
	        "State": {
	            "Status": "running",
	            "Running": true,
	            "Paused": false,
	            "Restarting": false,
	            "OOMKilled": false,
	            "Dead": false,
	            "Pid": 603478,
	            "ExitCode": 0,
	            "Error": "",
	            "StartedAt": "2024-12-13T19:18:52.484535042Z",
	            "FinishedAt": "0001-01-01T00:00:00Z"
	        },
	        "Image": "sha256:7cd263f59e19eeefdb79b99186c433854c2243e3d7fa2988b2d817cac7fc54f8",
	        "ResolvConfPath": "/var/lib/docker/containers/71118ff07ec6fa79104cf400f95c50c9ae227a1aad64456bb5c81d1d75958776/resolv.conf",
	        "HostnamePath": "/var/lib/docker/containers/71118ff07ec6fa79104cf400f95c50c9ae227a1aad64456bb5c81d1d75958776/hostname",
	        "HostsPath": "/var/lib/docker/containers/71118ff07ec6fa79104cf400f95c50c9ae227a1aad64456bb5c81d1d75958776/hosts",
	        "LogPath": "/var/lib/docker/containers/71118ff07ec6fa79104cf400f95c50c9ae227a1aad64456bb5c81d1d75958776/71118ff07ec6fa79104cf400f95c50c9ae227a1aad64456bb5c81d1d75958776-json.log",
	        "Name": "/addons-248098",
	        "RestartCount": 0,
	        "Driver": "overlay2",
	        "Platform": "linux",
	        "MountLabel": "",
	        "ProcessLabel": "",
	        "AppArmorProfile": "unconfined",
	        "ExecIDs": null,
	        "HostConfig": {
	            "Binds": [
	                "/lib/modules:/lib/modules:ro",
	                "addons-248098:/var"
	            ],
	            "ContainerIDFile": "",
	            "LogConfig": {
	                "Type": "json-file",
	                "Config": {}
	            },
	            "NetworkMode": "addons-248098",
	            "PortBindings": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": ""
	                    }
	                ]
	            },
	            "RestartPolicy": {
	                "Name": "no",
	                "MaximumRetryCount": 0
	            },
	            "AutoRemove": false,
	            "VolumeDriver": "",
	            "VolumesFrom": null,
	            "ConsoleSize": [
	                0,
	                0
	            ],
	            "CapAdd": null,
	            "CapDrop": null,
	            "CgroupnsMode": "host",
	            "Dns": [],
	            "DnsOptions": [],
	            "DnsSearch": [],
	            "ExtraHosts": null,
	            "GroupAdd": null,
	            "IpcMode": "private",
	            "Cgroup": "",
	            "Links": null,
	            "OomScoreAdj": 0,
	            "PidMode": "",
	            "Privileged": true,
	            "PublishAllPorts": false,
	            "ReadonlyRootfs": false,
	            "SecurityOpt": [
	                "seccomp=unconfined",
	                "apparmor=unconfined",
	                "label=disable"
	            ],
	            "Tmpfs": {
	                "/run": "",
	                "/tmp": ""
	            },
	            "UTSMode": "",
	            "UsernsMode": "",
	            "ShmSize": 67108864,
	            "Runtime": "runc",
	            "Isolation": "",
	            "CpuShares": 0,
	            "Memory": 4194304000,
	            "NanoCpus": 2000000000,
	            "CgroupParent": "",
	            "BlkioWeight": 0,
	            "BlkioWeightDevice": [],
	            "BlkioDeviceReadBps": [],
	            "BlkioDeviceWriteBps": [],
	            "BlkioDeviceReadIOps": [],
	            "BlkioDeviceWriteIOps": [],
	            "CpuPeriod": 0,
	            "CpuQuota": 0,
	            "CpuRealtimePeriod": 0,
	            "CpuRealtimeRuntime": 0,
	            "CpusetCpus": "",
	            "CpusetMems": "",
	            "Devices": [],
	            "DeviceCgroupRules": null,
	            "DeviceRequests": null,
	            "MemoryReservation": 0,
	            "MemorySwap": 8388608000,
	            "MemorySwappiness": null,
	            "OomKillDisable": false,
	            "PidsLimit": null,
	            "Ulimits": [],
	            "CpuCount": 0,
	            "CpuPercent": 0,
	            "IOMaximumIOps": 0,
	            "IOMaximumBandwidth": 0,
	            "MaskedPaths": null,
	            "ReadonlyPaths": null
	        },
	        "GraphDriver": {
	            "Data": {
	                "LowerDir": "/var/lib/docker/overlay2/1eb625f67d65df01d611761ab1363a88e29135b4887423b64c40650d552835e4-init/diff:/var/lib/docker/overlay2/7f60ef155cdf2fdd139012aca07bc58fe52fb18f995aec2de9b3156cc93a5c4e/diff",
	                "MergedDir": "/var/lib/docker/overlay2/1eb625f67d65df01d611761ab1363a88e29135b4887423b64c40650d552835e4/merged",
	                "UpperDir": "/var/lib/docker/overlay2/1eb625f67d65df01d611761ab1363a88e29135b4887423b64c40650d552835e4/diff",
	                "WorkDir": "/var/lib/docker/overlay2/1eb625f67d65df01d611761ab1363a88e29135b4887423b64c40650d552835e4/work"
	            },
	            "Name": "overlay2"
	        },
	        "Mounts": [
	            {
	                "Type": "bind",
	                "Source": "/lib/modules",
	                "Destination": "/lib/modules",
	                "Mode": "ro",
	                "RW": false,
	                "Propagation": "rprivate"
	            },
	            {
	                "Type": "volume",
	                "Name": "addons-248098",
	                "Source": "/var/lib/docker/volumes/addons-248098/_data",
	                "Destination": "/var",
	                "Driver": "local",
	                "Mode": "z",
	                "RW": true,
	                "Propagation": ""
	            }
	        ],
	        "Config": {
	            "Hostname": "addons-248098",
	            "Domainname": "",
	            "User": "",
	            "AttachStdin": false,
	            "AttachStdout": false,
	            "AttachStderr": false,
	            "ExposedPorts": {
	                "22/tcp": {},
	                "2376/tcp": {},
	                "32443/tcp": {},
	                "5000/tcp": {},
	                "8443/tcp": {}
	            },
	            "Tty": true,
	            "OpenStdin": false,
	            "StdinOnce": false,
	            "Env": [
	                "container=docker",
	                "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
	            ],
	            "Cmd": null,
	            "Image": "gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9",
	            "Volumes": null,
	            "WorkingDir": "/",
	            "Entrypoint": [
	                "/usr/local/bin/entrypoint",
	                "/sbin/init"
	            ],
	            "OnBuild": null,
	            "Labels": {
	                "created_by.minikube.sigs.k8s.io": "true",
	                "mode.minikube.sigs.k8s.io": "addons-248098",
	                "name.minikube.sigs.k8s.io": "addons-248098",
	                "role.minikube.sigs.k8s.io": ""
	            },
	            "StopSignal": "SIGRTMIN+3"
	        },
	        "NetworkSettings": {
	            "Bridge": "",
	            "SandboxID": "70c555dc0bf616658c39517ca754bbc8d0217eecb668e8d418b78ab6f8b69a36",
	            "SandboxKey": "/var/run/docker/netns/70c555dc0bf6",
	            "Ports": {
	                "22/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33512"
	                    }
	                ],
	                "2376/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33513"
	                    }
	                ],
	                "32443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33516"
	                    }
	                ],
	                "5000/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33514"
	                    }
	                ],
	                "8443/tcp": [
	                    {
	                        "HostIp": "127.0.0.1",
	                        "HostPort": "33515"
	                    }
	                ]
	            },
	            "HairpinMode": false,
	            "LinkLocalIPv6Address": "",
	            "LinkLocalIPv6PrefixLen": 0,
	            "SecondaryIPAddresses": null,
	            "SecondaryIPv6Addresses": null,
	            "EndpointID": "",
	            "Gateway": "",
	            "GlobalIPv6Address": "",
	            "GlobalIPv6PrefixLen": 0,
	            "IPAddress": "",
	            "IPPrefixLen": 0,
	            "IPv6Gateway": "",
	            "MacAddress": "",
	            "Networks": {
	                "addons-248098": {
	                    "IPAMConfig": {
	                        "IPv4Address": "192.168.49.2"
	                    },
	                    "Links": null,
	                    "Aliases": null,
	                    "MacAddress": "02:42:c0:a8:31:02",
	                    "DriverOpts": null,
	                    "NetworkID": "26d751c067fc6e1d561dda56dbfe217bd324778a2878c8a088bc311c8b3eb10d",
	                    "EndpointID": "01de513d376697fd43bead3e31bc9770fd3b8196e20a57de45d46140386899ce",
	                    "Gateway": "192.168.49.1",
	                    "IPAddress": "192.168.49.2",
	                    "IPPrefixLen": 24,
	                    "IPv6Gateway": "",
	                    "GlobalIPv6Address": "",
	                    "GlobalIPv6PrefixLen": 0,
	                    "DNSNames": [
	                        "addons-248098",
	                        "71118ff07ec6"
	                    ]
	                }
	            }
	        }
	    }
	]

-- /stdout --
helpers_test.go:239: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p addons-248098 -n addons-248098
helpers_test.go:244: <<< TestAddons/parallel/MetricsServer FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/MetricsServer]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-arm64 -p addons-248098 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-arm64 -p addons-248098 logs -n 25: (1.407569553s)
helpers_test.go:252: TestAddons/parallel/MetricsServer logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |        Profile         |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	| start   | --download-only -p                                                                          | download-docker-972085 | jenkins | v1.34.0 | 13 Dec 24 19:18 UTC |                     |
	|         | download-docker-972085                                                                      |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p download-docker-972085                                                                   | download-docker-972085 | jenkins | v1.34.0 | 13 Dec 24 19:18 UTC | 13 Dec 24 19:18 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-356185   | jenkins | v1.34.0 | 13 Dec 24 19:18 UTC |                     |
	|         | binary-mirror-356185                                                                        |                        |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                        |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                        |         |         |                     |                     |
	|         | http://127.0.0.1:34457                                                                      |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	| delete  | -p binary-mirror-356185                                                                     | binary-mirror-356185   | jenkins | v1.34.0 | 13 Dec 24 19:18 UTC | 13 Dec 24 19:18 UTC |
	| addons  | disable dashboard -p                                                                        | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:18 UTC |                     |
	|         | addons-248098                                                                               |                        |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:18 UTC |                     |
	|         | addons-248098                                                                               |                        |         |         |                     |                     |
	| start   | -p addons-248098 --wait=true                                                                | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:18 UTC | 13 Dec 24 19:21 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                        |         |         |                     |                     |
	|         | --addons=registry                                                                           |                        |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                        |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                        |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                        |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                        |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                        |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                        |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                        |         |         |                     |                     |
	|         | --addons=yakd --addons=volcano                                                              |                        |         |         |                     |                     |
	|         | --addons=amd-gpu-device-plugin                                                              |                        |         |         |                     |                     |
	|         | --driver=docker                                                                             |                        |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                        |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                        |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                        |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                        |         |         |                     |                     |
	| addons  | addons-248098 addons disable                                                                | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:21 UTC | 13 Dec 24 19:21 UTC |
	|         | volcano --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	| addons  | addons-248098 addons disable                                                                | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:21 UTC | 13 Dec 24 19:21 UTC |
	|         | gcp-auth --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:21 UTC | 13 Dec 24 19:21 UTC |
	|         | -p addons-248098                                                                            |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-248098 addons disable                                                                | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:21 UTC | 13 Dec 24 19:22 UTC |
	|         | headlamp --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| ip      | addons-248098 ip                                                                            | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:22 UTC | 13 Dec 24 19:22 UTC |
	| addons  | addons-248098 addons disable                                                                | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:22 UTC | 13 Dec 24 19:22 UTC |
	|         | registry --alsologtostderr                                                                  |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-248098 addons disable                                                                | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:22 UTC | 13 Dec 24 19:22 UTC |
	|         | yakd --alsologtostderr -v=1                                                                 |                        |         |         |                     |                     |
	| addons  | addons-248098 addons                                                                        | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:22 UTC | 13 Dec 24 19:22 UTC |
	|         | disable nvidia-device-plugin                                                                |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-248098 ssh cat                                                                       | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:22 UTC | 13 Dec 24 19:22 UTC |
	|         | /opt/local-path-provisioner/pvc-3a3ae2c7-94c0-4b5c-a99c-675901123adf_default_test-pvc/file1 |                        |         |         |                     |                     |
	| addons  | addons-248098 addons                                                                        | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:22 UTC | 13 Dec 24 19:22 UTC |
	|         | disable cloud-spanner                                                                       |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-248098 addons disable                                                                | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:22 UTC | 13 Dec 24 19:22 UTC |
	|         | storage-provisioner-rancher                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-248098 addons                                                                        | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:23 UTC | 13 Dec 24 19:23 UTC |
	|         | disable volumesnapshots                                                                     |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-248098 addons                                                                        | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:23 UTC | 13 Dec 24 19:23 UTC |
	|         | disable csi-hostpath-driver                                                                 |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| addons  | addons-248098 addons                                                                        | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:23 UTC | 13 Dec 24 19:23 UTC |
	|         | disable inspektor-gadget                                                                    |                        |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                        |         |         |                     |                     |
	| ssh     | addons-248098 ssh curl -s                                                                   | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:23 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                        |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                        |         |         |                     |                     |
	| ip      | addons-248098 ip                                                                            | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:25 UTC | 13 Dec 24 19:25 UTC |
	| addons  | addons-248098 addons disable                                                                | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:25 UTC | 13 Dec 24 19:25 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                        |         |         |                     |                     |
	|         | -v=1                                                                                        |                        |         |         |                     |                     |
	| addons  | addons-248098 addons disable                                                                | addons-248098          | jenkins | v1.34.0 | 13 Dec 24 19:25 UTC | 13 Dec 24 19:26 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                        |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/13 19:18:27
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 19:18:27.055781  602969 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:18:27.056002  602969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:18:27.056033  602969 out.go:358] Setting ErrFile to fd 2...
	I1213 19:18:27.056058  602969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:18:27.056425  602969 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-596807/.minikube/bin
	I1213 19:18:27.057083  602969 out.go:352] Setting JSON to false
	I1213 19:18:27.058049  602969 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10823,"bootTime":1734106684,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 19:18:27.058208  602969 start.go:139] virtualization:  
	I1213 19:18:27.061235  602969 out.go:177] * [addons-248098] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1213 19:18:27.064472  602969 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 19:18:27.064499  602969 notify.go:220] Checking for updates...
	I1213 19:18:27.068685  602969 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:18:27.070671  602969 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20090-596807/kubeconfig
	I1213 19:18:27.073273  602969 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-596807/.minikube
	I1213 19:18:27.075308  602969 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 19:18:27.077562  602969 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 19:18:27.080283  602969 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 19:18:27.115987  602969 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1213 19:18:27.116107  602969 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:18:27.170408  602969 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-12-13 19:18:27.161534867 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1213 19:18:27.170522  602969 docker.go:318] overlay module found
	I1213 19:18:27.172847  602969 out.go:177] * Using the docker driver based on user configuration
	I1213 19:18:27.175263  602969 start.go:297] selected driver: docker
	I1213 19:18:27.175290  602969 start.go:901] validating driver "docker" against <nil>
	I1213 19:18:27.175322  602969 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 19:18:27.176042  602969 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:18:27.234146  602969 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-12-13 19:18:27.225155419 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1213 19:18:27.234392  602969 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1213 19:18:27.234624  602969 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 19:18:27.236939  602969 out.go:177] * Using Docker driver with root privileges
	I1213 19:18:27.239129  602969 cni.go:84] Creating CNI manager for ""
	I1213 19:18:27.239209  602969 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 19:18:27.239231  602969 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 19:18:27.239319  602969 start.go:340] cluster config:
	{Name:addons-248098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-248098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:18:27.243510  602969 out.go:177] * Starting "addons-248098" primary control-plane node in "addons-248098" cluster
	I1213 19:18:27.245521  602969 cache.go:121] Beginning downloading kic base image for docker with crio
	I1213 19:18:27.247755  602969 out.go:177] * Pulling base image v0.0.45-1734029593-20090 ...
	I1213 19:18:27.249819  602969 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 19:18:27.249905  602969 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20090-596807/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4
	I1213 19:18:27.249904  602969 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local docker daemon
	I1213 19:18:27.249919  602969 cache.go:56] Caching tarball of preloaded images
	I1213 19:18:27.250084  602969 preload.go:172] Found /home/jenkins/minikube-integration/20090-596807/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4 in cache, skipping download
	I1213 19:18:27.250176  602969 cache.go:59] Finished verifying existence of preloaded tar for v1.31.2 on crio
	I1213 19:18:27.250719  602969 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/config.json ...
	I1213 19:18:27.250768  602969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/config.json: {Name:mk4985bbfdf21426c540bab4f5039b3f705d29dc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:18:27.266401  602969 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 to local cache
	I1213 19:18:27.266532  602969 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local cache directory
	I1213 19:18:27.266554  602969 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local cache directory, skipping pull
	I1213 19:18:27.266559  602969 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 exists in cache, skipping pull
	I1213 19:18:27.266567  602969 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 as a tarball
	I1213 19:18:27.266572  602969 cache.go:162] Loading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 from local cache
	I1213 19:18:45.145346  602969 cache.go:164] successfully loaded and using gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 from cached tarball
	I1213 19:18:45.145398  602969 cache.go:194] Successfully downloaded all kic artifacts
	I1213 19:18:45.145432  602969 start.go:360] acquireMachinesLock for addons-248098: {Name:mk90cd79b2d7e9671af7af8749755f35a5159dc4 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I1213 19:18:45.147734  602969 start.go:364] duration metric: took 2.261167ms to acquireMachinesLock for "addons-248098"
	I1213 19:18:45.147808  602969 start.go:93] Provisioning new machine with config: &{Name:addons-248098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-248098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 19:18:45.147954  602969 start.go:125] createHost starting for "" (driver="docker")
	I1213 19:18:45.160055  602969 out.go:235] * Creating docker container (CPUs=2, Memory=4000MB) ...
	I1213 19:18:45.160417  602969 start.go:159] libmachine.API.Create for "addons-248098" (driver="docker")
	I1213 19:18:45.160457  602969 client.go:168] LocalClient.Create starting
	I1213 19:18:45.160603  602969 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/20090-596807/.minikube/certs/ca.pem
	I1213 19:18:45.524939  602969 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/20090-596807/.minikube/certs/cert.pem
	I1213 19:18:45.865688  602969 cli_runner.go:164] Run: docker network inspect addons-248098 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	W1213 19:18:45.887665  602969 cli_runner.go:211] docker network inspect addons-248098 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
	I1213 19:18:45.887747  602969 network_create.go:284] running [docker network inspect addons-248098] to gather additional debugging logs...
	I1213 19:18:45.887768  602969 cli_runner.go:164] Run: docker network inspect addons-248098
	W1213 19:18:45.903678  602969 cli_runner.go:211] docker network inspect addons-248098 returned with exit code 1
	I1213 19:18:45.903718  602969 network_create.go:287] error running [docker network inspect addons-248098]: docker network inspect addons-248098: exit status 1
	stdout:
	[]
	
	stderr:
	Error response from daemon: network addons-248098 not found
	I1213 19:18:45.903730  602969 network_create.go:289] output of [docker network inspect addons-248098]: -- stdout --
	[]
	
	-- /stdout --
	** stderr ** 
	Error response from daemon: network addons-248098 not found
	
	** /stderr **
	I1213 19:18:45.903836  602969 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 19:18:45.920427  602969 network.go:206] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ced260}
	I1213 19:18:45.920475  602969 network_create.go:124] attempt to create docker network addons-248098 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
	I1213 19:18:45.920541  602969 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=addons-248098 addons-248098
	I1213 19:18:45.995385  602969 network_create.go:108] docker network addons-248098 192.168.49.0/24 created
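	Note: network_create picked 192.168.49.0/24 because it was the first free private subnet, then created the bridge with a fixed gateway and MTU 1500. What was actually created can be read back (a sketch, assuming the network still exists):

	  # print the subnet and gateway of the minikube-managed bridge network
	  docker network inspect addons-248098 \
	    --format '{{range .IPAM.Config}}{{.Subnet}} gw {{.Gateway}}{{end}}'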
	I1213 19:18:45.995429  602969 kic.go:121] calculated static IP "192.168.49.2" for the "addons-248098" container
	I1213 19:18:45.995509  602969 cli_runner.go:164] Run: docker ps -a --format {{.Names}}
	I1213 19:18:46.022595  602969 cli_runner.go:164] Run: docker volume create addons-248098 --label name.minikube.sigs.k8s.io=addons-248098 --label created_by.minikube.sigs.k8s.io=true
	I1213 19:18:46.040850  602969 oci.go:103] Successfully created a docker volume addons-248098
	I1213 19:18:46.040947  602969 cli_runner.go:164] Run: docker run --rm --name addons-248098-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-248098 --entrypoint /usr/bin/test -v addons-248098:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 -d /var/lib
	I1213 19:18:48.146229  602969 cli_runner.go:217] Completed: docker run --rm --name addons-248098-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-248098 --entrypoint /usr/bin/test -v addons-248098:/var gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 -d /var/lib: (2.10523956s)
	I1213 19:18:48.146288  602969 oci.go:107] Successfully prepared a docker volume addons-248098
	I1213 19:18:48.146330  602969 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 19:18:48.146358  602969 kic.go:194] Starting extracting preloaded images to volume ...
	I1213 19:18:48.146436  602969 cli_runner.go:164] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20090-596807/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-248098:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 -I lz4 -xf /preloaded.tar -C /extractDir
	I1213 19:18:52.242035  602969 cli_runner.go:217] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/jenkins/minikube-integration/20090-596807/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4:/preloaded.tar:ro -v addons-248098:/extractDir gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 -I lz4 -xf /preloaded.tar -C /extractDir: (4.095557848s)
	I1213 19:18:52.242067  602969 kic.go:203] duration metric: took 4.095714766s to extract preloaded images to volume ...
	W1213 19:18:52.242215  602969 cgroups_linux.go:77] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
	I1213 19:18:52.242404  602969 cli_runner.go:164] Run: docker info --format "'{{json .SecurityOptions}}'"
	I1213 19:18:52.300006  602969 cli_runner.go:164] Run: docker run -d -t --privileged --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname addons-248098 --name addons-248098 --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=addons-248098 --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=addons-248098 --network addons-248098 --ip 192.168.49.2 --volume addons-248098:/var --security-opt apparmor=unconfined --memory=4000mb --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9
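	Note: each --publish=127.0.0.1::PORT flag in the docker run above asks Docker for an ephemeral loopback-bound host port; the SSH port this log later connects to (33512) can be read back with the same template the provisioner itself uses below:

	  # look up the host port mapped to the node's sshd (22/tcp)
	  docker container inspect addons-248098 \
	    --format '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'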
	I1213 19:18:52.684589  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Running}}
	I1213 19:18:52.708745  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:18:52.731796  602969 cli_runner.go:164] Run: docker exec addons-248098 stat /var/lib/dpkg/alternatives/iptables
	I1213 19:18:52.783069  602969 oci.go:144] the created container "addons-248098" has a running status.
	I1213 19:18:52.783097  602969 kic.go:225] Creating ssh key for kic: /home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa...
	I1213 19:18:53.755040  602969 kic_runner.go:191] docker (temp): /home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
	I1213 19:18:53.776445  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:18:53.796248  602969 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
	I1213 19:18:53.796269  602969 kic_runner.go:114] Args: [docker exec --privileged addons-248098 chown docker:docker /home/docker/.ssh/authorized_keys]
	I1213 19:18:53.850724  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:18:53.868534  602969 machine.go:93] provisionDockerMachine start ...
	I1213 19:18:53.868627  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:18:53.888235  602969 main.go:141] libmachine: Using SSH client type: native
	I1213 19:18:53.888513  602969 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 33512 <nil> <nil>}
	I1213 19:18:53.888529  602969 main.go:141] libmachine: About to run SSH command:
	hostname
	I1213 19:18:54.034254  602969 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-248098
	
	I1213 19:18:54.034303  602969 ubuntu.go:169] provisioning hostname "addons-248098"
	I1213 19:18:54.034373  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:18:54.054913  602969 main.go:141] libmachine: Using SSH client type: native
	I1213 19:18:54.055181  602969 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 33512 <nil> <nil>}
	I1213 19:18:54.055206  602969 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-248098 && echo "addons-248098" | sudo tee /etc/hostname
	I1213 19:18:54.214170  602969 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-248098
	
	I1213 19:18:54.214261  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:18:54.231476  602969 main.go:141] libmachine: Using SSH client type: native
	I1213 19:18:54.231736  602969 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 33512 <nil> <nil>}
	I1213 19:18:54.231760  602969 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-248098' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-248098/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-248098' | sudo tee -a /etc/hosts; 
				fi
			fi
	I1213 19:18:54.378633  602969 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I1213 19:18:54.378661  602969 ubuntu.go:175] set auth options {CertDir:/home/jenkins/minikube-integration/20090-596807/.minikube CaCertPath:/home/jenkins/minikube-integration/20090-596807/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/20090-596807/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/20090-596807/.minikube}
	I1213 19:18:54.378688  602969 ubuntu.go:177] setting up certificates
	I1213 19:18:54.378698  602969 provision.go:84] configureAuth start
	I1213 19:18:54.378769  602969 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-248098
	I1213 19:18:54.395597  602969 provision.go:143] copyHostCerts
	I1213 19:18:54.395681  602969 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-596807/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/20090-596807/.minikube/ca.pem (1082 bytes)
	I1213 19:18:54.395809  602969 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-596807/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/20090-596807/.minikube/cert.pem (1123 bytes)
	I1213 19:18:54.395898  602969 exec_runner.go:151] cp: /home/jenkins/minikube-integration/20090-596807/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/20090-596807/.minikube/key.pem (1679 bytes)
	I1213 19:18:54.395967  602969 provision.go:117] generating server cert: /home/jenkins/minikube-integration/20090-596807/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/20090-596807/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/20090-596807/.minikube/certs/ca-key.pem org=jenkins.addons-248098 san=[127.0.0.1 192.168.49.2 addons-248098 localhost minikube]
	I1213 19:18:54.809899  602969 provision.go:177] copyRemoteCerts
	I1213 19:18:54.809970  602969 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I1213 19:18:54.810013  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:18:54.827762  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:18:54.931460  602969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-596807/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I1213 19:18:54.956121  602969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-596807/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I1213 19:18:54.980178  602969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-596807/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I1213 19:18:55.013876  602969 provision.go:87] duration metric: took 635.158103ms to configureAuth
	I1213 19:18:55.013918  602969 ubuntu.go:193] setting minikube options for container-runtime
	I1213 19:18:55.014153  602969 config.go:182] Loaded profile config "addons-248098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:18:55.014302  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:18:55.071101  602969 main.go:141] libmachine: Using SSH client type: native
	I1213 19:18:55.071374  602969 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x416340] 0x418b80 <nil>  [] 0s} 127.0.0.1 33512 <nil> <nil>}
	I1213 19:18:55.071398  602969 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I1213 19:18:55.329830  602969 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I1213 19:18:55.329852  602969 machine.go:96] duration metric: took 1.461297288s to provisionDockerMachine
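	Note: the tee a few lines up writes a one-line sysconfig fragment so CRI-O treats the service CIDR 10.96.0.0/12 as an insecure registry range. To confirm what landed on disk (a sketch, assuming the node container is still up):

	  # should print exactly the CRIO_MINIKUBE_OPTIONS line echoed above
	  docker exec addons-248098 cat /etc/sysconfig/crio.minikube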
	I1213 19:18:55.329863  602969 client.go:171] duration metric: took 10.169398436s to LocalClient.Create
	I1213 19:18:55.329883  602969 start.go:167] duration metric: took 10.169469633s to libmachine.API.Create "addons-248098"
	I1213 19:18:55.329891  602969 start.go:293] postStartSetup for "addons-248098" (driver="docker")
	I1213 19:18:55.329901  602969 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I1213 19:18:55.329970  602969 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I1213 19:18:55.330017  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:18:55.347094  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:18:55.447547  602969 ssh_runner.go:195] Run: cat /etc/os-release
	I1213 19:18:55.450755  602969 main.go:141] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
	I1213 19:18:55.450790  602969 main.go:141] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
	I1213 19:18:55.450804  602969 main.go:141] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
	I1213 19:18:55.450812  602969 info.go:137] Remote host: Ubuntu 22.04.5 LTS
	I1213 19:18:55.450823  602969 filesync.go:126] Scanning /home/jenkins/minikube-integration/20090-596807/.minikube/addons for local assets ...
	I1213 19:18:55.450895  602969 filesync.go:126] Scanning /home/jenkins/minikube-integration/20090-596807/.minikube/files for local assets ...
	I1213 19:18:55.450920  602969 start.go:296] duration metric: took 121.02358ms for postStartSetup
	I1213 19:18:55.451245  602969 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-248098
	I1213 19:18:55.467710  602969 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/config.json ...
	I1213 19:18:55.468009  602969 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:18:55.468062  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:18:55.485705  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:18:55.583312  602969 ssh_runner.go:195] Run: sh -c "df -BG /var | awk 'NR==2{print $4}'"
	I1213 19:18:55.588009  602969 start.go:128] duration metric: took 10.4400343s to createHost
	I1213 19:18:55.588034  602969 start.go:83] releasing machines lock for "addons-248098", held for 10.440259337s
	I1213 19:18:55.588121  602969 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" addons-248098
	I1213 19:18:55.604873  602969 ssh_runner.go:195] Run: cat /version.json
	I1213 19:18:55.604925  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:18:55.605175  602969 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I1213 19:18:55.605236  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:18:55.625747  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:18:55.641932  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:18:55.865572  602969 ssh_runner.go:195] Run: systemctl --version
	I1213 19:18:55.869608  602969 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I1213 19:18:56.023964  602969 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I1213 19:18:56.028708  602969 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *loopback.conf* -not -name *.mk_disabled -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 19:18:56.050673  602969 cni.go:221] loopback cni configuration disabled: "/etc/cni/net.d/*loopback.conf*" found
	I1213 19:18:56.050757  602969 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I1213 19:18:56.083729  602969 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist, /etc/cni/net.d/100-crio-bridge.conf] bridge cni config(s)
	I1213 19:18:56.083750  602969 start.go:495] detecting cgroup driver to use...
	I1213 19:18:56.083785  602969 detect.go:187] detected "cgroupfs" cgroup driver on host os
	I1213 19:18:56.083835  602969 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I1213 19:18:56.099746  602969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I1213 19:18:56.111443  602969 docker.go:217] disabling cri-docker service (if available) ...
	I1213 19:18:56.111553  602969 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I1213 19:18:56.125763  602969 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I1213 19:18:56.140789  602969 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I1213 19:18:56.237908  602969 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I1213 19:18:56.325624  602969 docker.go:233] disabling docker service ...
	I1213 19:18:56.325743  602969 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I1213 19:18:56.346209  602969 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I1213 19:18:56.359581  602969 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I1213 19:18:56.451957  602969 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I1213 19:18:56.550085  602969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I1213 19:18:56.563345  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I1213 19:18:56.581145  602969 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.10" pause image...
	I1213 19:18:56.581234  602969 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.10"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:18:56.592261  602969 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I1213 19:18:56.592350  602969 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:18:56.603099  602969 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:18:56.613956  602969 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:18:56.624912  602969 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I1213 19:18:56.634235  602969 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *"net.ipv4.ip_unprivileged_port_start=.*"/d' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:18:56.644471  602969 ssh_runner.go:195] Run: sh -c "sudo grep -q "^ *default_sysctls" /etc/crio/crio.conf.d/02-crio.conf || sudo sed -i '/conmon_cgroup = .*/a default_sysctls = \[\n\]' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:18:56.660595  602969 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^default_sysctls *= *\[|&\n  "net.ipv4.ip_unprivileged_port_start=0",|' /etc/crio/crio.conf.d/02-crio.conf"
	I1213 19:18:56.670646  602969 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I1213 19:18:56.679462  602969 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I1213 19:18:56.688378  602969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:18:56.766966  602969 ssh_runner.go:195] Run: sudo systemctl restart crio
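	Note: the sed runs above rewrite /etc/crio/crio.conf.d/02-crio.conf in place (pause image registry.k8s.io/pause:3.10, cgroupfs cgroup manager, pod-scoped conmon, unprivileged ports from 0) before this restart. A sketch for spot-checking the result:

	  # show the keys minikube just rewrote
	  docker exec addons-248098 sh -c \
	    'grep -E "pause_image|cgroup_manager|conmon_cgroup|ip_unprivileged_port_start" /etc/crio/crio.conf.d/02-crio.conf'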
	I1213 19:18:56.886569  602969 start.go:542] Will wait 60s for socket path /var/run/crio/crio.sock
	I1213 19:18:56.886719  602969 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I1213 19:18:56.890716  602969 start.go:563] Will wait 60s for crictl version
	I1213 19:18:56.890833  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:18:56.894217  602969 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I1213 19:18:56.932502  602969 start.go:579] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.24.6
	RuntimeApiVersion:  v1
	I1213 19:18:56.932611  602969 ssh_runner.go:195] Run: crio --version
	I1213 19:18:56.971355  602969 ssh_runner.go:195] Run: crio --version
	I1213 19:18:57.021813  602969 out.go:177] * Preparing Kubernetes v1.31.2 on CRI-O 1.24.6 ...
	I1213 19:18:57.024214  602969 cli_runner.go:164] Run: docker network inspect addons-248098 --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
	I1213 19:18:57.042490  602969 ssh_runner.go:195] Run: grep 192.168.49.1	host.minikube.internal$ /etc/hosts
	I1213 19:18:57.046587  602969 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:18:57.059450  602969 kubeadm.go:883] updating cluster {Name:addons-248098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-248098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I1213 19:18:57.059574  602969 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 19:18:57.059643  602969 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:18:57.137675  602969 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 19:18:57.137698  602969 crio.go:433] Images already preloaded, skipping extraction
	I1213 19:18:57.137761  602969 ssh_runner.go:195] Run: sudo crictl images --output json
	I1213 19:18:57.173787  602969 crio.go:514] all images are preloaded for cri-o runtime.
	I1213 19:18:57.173812  602969 cache_images.go:84] Images are preloaded, skipping loading
	I1213 19:18:57.173820  602969 kubeadm.go:934] updating node { 192.168.49.2 8443 v1.31.2 crio true true} ...
	I1213 19:18:57.173921  602969 kubeadm.go:946] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --cgroups-per-qos=false --config=/var/lib/kubelet/config.yaml --enforce-node-allocatable= --hostname-override=addons-248098 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.2 ClusterName:addons-248098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I1213 19:18:57.174003  602969 ssh_runner.go:195] Run: crio config
	I1213 19:18:57.222356  602969 cni.go:84] Creating CNI manager for ""
	I1213 19:18:57.222379  602969 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 19:18:57.222389  602969 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I1213 19:18:57.222411  602969 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.31.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-248098 NodeName:addons-248098 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I1213 19:18:57.222539  602969 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.49.2
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-248098"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "192.168.49.2"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.31.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I1213 19:18:57.222611  602969 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.2
	I1213 19:18:57.231575  602969 binaries.go:44] Found k8s binaries, skipping transfer
	I1213 19:18:57.231687  602969 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I1213 19:18:57.240565  602969 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (363 bytes)
	I1213 19:18:57.259184  602969 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I1213 19:18:57.277514  602969 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2287 bytes)
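	Note: the 2287-byte file staged above is the kubeadm config rendered earlier (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration). It can be sanity-checked in place with the node's own kubeadm binary (a sketch, assuming the "kubeadm config validate" subcommand is available, as it is in recent kubeadm releases):

	  # validate the staged config before kubeadm init consumes it
	  docker exec addons-248098 \
	    /var/lib/minikube/binaries/v1.31.2/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new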
	I1213 19:18:57.297096  602969 ssh_runner.go:195] Run: grep 192.168.49.2	control-plane.minikube.internal$ /etc/hosts
	I1213 19:18:57.300762  602969 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I1213 19:18:57.311998  602969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:18:57.393593  602969 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:18:57.407504  602969 certs.go:68] Setting up /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098 for IP: 192.168.49.2
	I1213 19:18:57.407530  602969 certs.go:194] generating shared ca certs ...
	I1213 19:18:57.407547  602969 certs.go:226] acquiring lock for ca certs: {Name:mk3cdd0ea94f7f906448b193b6df25da3e2261b6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:18:57.407685  602969 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/20090-596807/.minikube/ca.key
	I1213 19:18:57.753657  602969 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-596807/.minikube/ca.crt ...
	I1213 19:18:57.753689  602969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-596807/.minikube/ca.crt: {Name:mkd47ec227d5a0a992364ca75af37df461bf8251 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:18:57.754556  602969 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-596807/.minikube/ca.key ...
	I1213 19:18:57.754574  602969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-596807/.minikube/ca.key: {Name:mk99e7ab436fef1f7051dabcc331ea2d120ce21b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:18:57.754673  602969 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/20090-596807/.minikube/proxy-client-ca.key
	I1213 19:18:57.965859  602969 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-596807/.minikube/proxy-client-ca.crt ...
	I1213 19:18:57.965891  602969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-596807/.minikube/proxy-client-ca.crt: {Name:mkd3882d2ccf5bff7977b8f91ec4b985ade96ca8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:18:57.966508  602969 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-596807/.minikube/proxy-client-ca.key ...
	I1213 19:18:57.966527  602969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-596807/.minikube/proxy-client-ca.key: {Name:mk9f1e77620da4f62399f28c89e1e49e6502ff2a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:18:57.966625  602969 certs.go:256] generating profile certs ...
	I1213 19:18:57.966697  602969 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.key
	I1213 19:18:57.966723  602969 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt with IP's: []
	I1213 19:18:58.272499  602969 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt ...
	I1213 19:18:58.272535  602969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: {Name:mk65d52d2f3cffee39c58a204c5c86169e26beed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:18:58.273970  602969 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.key ...
	I1213 19:18:58.273989  602969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.key: {Name:mk7cf318e896508552eb82f0ebadb2445f7082e5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:18:58.274084  602969 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/apiserver.key.2386a425
	I1213 19:18:58.274106  602969 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/apiserver.crt.2386a425 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.49.2]
	I1213 19:18:58.651536  602969 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/apiserver.crt.2386a425 ...
	I1213 19:18:58.651567  602969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/apiserver.crt.2386a425: {Name:mke53ea42652e58e64dcdd4b89ef7f4a4a14f85c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:18:58.652283  602969 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/apiserver.key.2386a425 ...
	I1213 19:18:58.652304  602969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/apiserver.key.2386a425: {Name:mke16db69a70a4e768d2fcef5a36f02309bb7b5b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:18:58.652951  602969 certs.go:381] copying /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/apiserver.crt.2386a425 -> /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/apiserver.crt
	I1213 19:18:58.653040  602969 certs.go:385] copying /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/apiserver.key.2386a425 -> /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/apiserver.key
	I1213 19:18:58.653091  602969 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/proxy-client.key
	I1213 19:18:58.653112  602969 crypto.go:68] Generating cert /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/proxy-client.crt with IP's: []
	I1213 19:18:58.926757  602969 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/proxy-client.crt ...
	I1213 19:18:58.926786  602969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/proxy-client.crt: {Name:mkd3bdca2f1c30fa6d033d08e64b97c34b1ee90f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:18:58.927544  602969 crypto.go:164] Writing key to /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/proxy-client.key ...
	I1213 19:18:58.927566  602969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/proxy-client.key: {Name:mk13a5ea1b680a0acc1fb9a90733ee1b8d555e62 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:18:58.927773  602969 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-596807/.minikube/certs/ca-key.pem (1675 bytes)
	I1213 19:18:58.927819  602969 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-596807/.minikube/certs/ca.pem (1082 bytes)
	I1213 19:18:58.927848  602969 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-596807/.minikube/certs/cert.pem (1123 bytes)
	I1213 19:18:58.927877  602969 certs.go:484] found cert: /home/jenkins/minikube-integration/20090-596807/.minikube/certs/key.pem (1679 bytes)
	I1213 19:18:58.928547  602969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-596807/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I1213 19:18:58.976754  602969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-596807/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I1213 19:18:59.020377  602969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-596807/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I1213 19:18:59.047812  602969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-596807/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I1213 19:18:59.073635  602969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I1213 19:18:59.099274  602969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I1213 19:18:59.124829  602969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I1213 19:18:59.150551  602969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I1213 19:18:59.175603  602969 ssh_runner.go:362] scp /home/jenkins/minikube-integration/20090-596807/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I1213 19:18:59.200255  602969 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I1213 19:18:59.218487  602969 ssh_runner.go:195] Run: openssl version
	I1213 19:18:59.224085  602969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I1213 19:18:59.233761  602969 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:18:59.237304  602969 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Dec 13 19:18 /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:18:59.237375  602969 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I1213 19:18:59.244920  602969 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
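
For reference, the two Run lines above implement the standard OpenSSL trust-store convention: compute the certificate's subject hash, then symlink the PEM as <hash>.0 under /etc/ssl/certs so OpenSSL-based clients trust the minikube CA. A minimal Go sketch of that step (an illustration, not minikube's actual code; the cert path and the b5213941 hash are taken from the log):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func trustCA(pem string) error {
	// `openssl x509 -hash -noout -in <pem>` prints the subject hash, e.g. "b5213941".
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of `ln -fs`: drop any stale link, then point it at the cert.
	os.Remove(link)
	return os.Symlink(pem, link)
}

func main() {
	if err := trustCA("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}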
	I1213 19:18:59.254323  602969 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I1213 19:18:59.257610  602969 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I1213 19:18:59.257675  602969 kubeadm.go:392] StartCluster: {Name:addons-248098 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:addons-248098 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:18:59.257768  602969 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I1213 19:18:59.257862  602969 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I1213 19:18:59.301228  602969 cri.go:89] found id: ""
	I1213 19:18:59.301305  602969 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I1213 19:18:59.310353  602969 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I1213 19:18:59.319841  602969 kubeadm.go:214] ignoring SystemVerification for kubeadm because of docker driver
	I1213 19:18:59.319904  602969 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I1213 19:18:59.328815  602969 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I1213 19:18:59.328839  602969 kubeadm.go:157] found existing configuration files:
	
	I1213 19:18:59.328891  602969 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I1213 19:18:59.338594  602969 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I1213 19:18:59.338663  602969 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I1213 19:18:59.347744  602969 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I1213 19:18:59.356911  602969 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I1213 19:18:59.356991  602969 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I1213 19:18:59.365506  602969 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I1213 19:18:59.374420  602969 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I1213 19:18:59.374491  602969 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I1213 19:18:59.383136  602969 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I1213 19:18:59.392580  602969 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I1213 19:18:59.392655  602969 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I1213 19:18:59.401105  602969 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
	I1213 19:18:59.449629  602969 kubeadm.go:310] [init] Using Kubernetes version: v1.31.2
	I1213 19:18:59.449991  602969 kubeadm.go:310] [preflight] Running pre-flight checks
	I1213 19:18:59.470150  602969 kubeadm.go:310] [preflight] The system verification failed. Printing the output from the verification:
	I1213 19:18:59.470313  602969 kubeadm.go:310] KERNEL_VERSION: 5.15.0-1072-aws
	I1213 19:18:59.470373  602969 kubeadm.go:310] OS: Linux
	I1213 19:18:59.470453  602969 kubeadm.go:310] CGROUPS_CPU: enabled
	I1213 19:18:59.470524  602969 kubeadm.go:310] CGROUPS_CPUACCT: enabled
	I1213 19:18:59.470597  602969 kubeadm.go:310] CGROUPS_CPUSET: enabled
	I1213 19:18:59.470667  602969 kubeadm.go:310] CGROUPS_DEVICES: enabled
	I1213 19:18:59.470745  602969 kubeadm.go:310] CGROUPS_FREEZER: enabled
	I1213 19:18:59.470813  602969 kubeadm.go:310] CGROUPS_MEMORY: enabled
	I1213 19:18:59.470888  602969 kubeadm.go:310] CGROUPS_PIDS: enabled
	I1213 19:18:59.470957  602969 kubeadm.go:310] CGROUPS_HUGETLB: enabled
	I1213 19:18:59.471051  602969 kubeadm.go:310] CGROUPS_BLKIO: enabled
	I1213 19:18:59.528975  602969 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I1213 19:18:59.529091  602969 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I1213 19:18:59.529189  602969 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I1213 19:18:59.536172  602969 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I1213 19:18:59.540072  602969 out.go:235]   - Generating certificates and keys ...
	I1213 19:18:59.540200  602969 kubeadm.go:310] [certs] Using existing ca certificate authority
	I1213 19:18:59.540286  602969 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I1213 19:19:00.246678  602969 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I1213 19:19:00.785838  602969 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I1213 19:19:01.636131  602969 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I1213 19:19:02.024791  602969 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I1213 19:19:02.790385  602969 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I1213 19:19:02.790765  602969 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [addons-248098 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 19:19:03.407514  602969 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I1213 19:19:03.407674  602969 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [addons-248098 localhost] and IPs [192.168.49.2 127.0.0.1 ::1]
	I1213 19:19:04.222280  602969 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I1213 19:19:04.641177  602969 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I1213 19:19:05.202907  602969 kubeadm.go:310] [certs] Generating "sa" key and public key
	I1213 19:19:05.203140  602969 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I1213 19:19:06.009479  602969 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I1213 19:19:06.181840  602969 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I1213 19:19:07.103019  602969 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I1213 19:19:07.437209  602969 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I1213 19:19:08.145533  602969 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I1213 19:19:08.146133  602969 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I1213 19:19:08.151058  602969 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I1213 19:19:08.153615  602969 out.go:235]   - Booting up control plane ...
	I1213 19:19:08.153730  602969 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I1213 19:19:08.153813  602969 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I1213 19:19:08.154897  602969 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I1213 19:19:08.164962  602969 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I1213 19:19:08.172281  602969 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I1213 19:19:08.172339  602969 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I1213 19:19:08.257882  602969 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I1213 19:19:08.258008  602969 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I1213 19:19:09.259532  602969 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.001649368s
	I1213 19:19:09.259630  602969 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I1213 19:19:15.761647  602969 kubeadm.go:310] [api-check] The API server is healthy after 6.502180938s
	I1213 19:19:15.781085  602969 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I1213 19:19:15.796568  602969 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I1213 19:19:15.828071  602969 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I1213 19:19:15.828279  602969 kubeadm.go:310] [mark-control-plane] Marking the node addons-248098 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I1213 19:19:15.843567  602969 kubeadm.go:310] [bootstrap-token] Using token: j5o3j6.zgtne4vwby5cxh24
	I1213 19:19:15.845663  602969 out.go:235]   - Configuring RBAC rules ...
	I1213 19:19:15.845800  602969 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I1213 19:19:15.851702  602969 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I1213 19:19:15.859463  602969 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I1213 19:19:15.863643  602969 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I1213 19:19:15.867638  602969 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I1213 19:19:15.872771  602969 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I1213 19:19:16.168591  602969 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I1213 19:19:16.629547  602969 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I1213 19:19:17.174865  602969 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I1213 19:19:17.174896  602969 kubeadm.go:310] 
	I1213 19:19:17.174968  602969 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I1213 19:19:17.174973  602969 kubeadm.go:310] 
	I1213 19:19:17.175100  602969 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I1213 19:19:17.175112  602969 kubeadm.go:310] 
	I1213 19:19:17.175138  602969 kubeadm.go:310]   mkdir -p $HOME/.kube
	I1213 19:19:17.175211  602969 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I1213 19:19:17.175304  602969 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I1213 19:19:17.175319  602969 kubeadm.go:310] 
	I1213 19:19:17.175382  602969 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I1213 19:19:17.175388  602969 kubeadm.go:310] 
	I1213 19:19:17.175454  602969 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I1213 19:19:17.175463  602969 kubeadm.go:310] 
	I1213 19:19:17.175520  602969 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I1213 19:19:17.175628  602969 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I1213 19:19:17.175704  602969 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I1213 19:19:17.175709  602969 kubeadm.go:310] 
	I1213 19:19:17.175822  602969 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I1213 19:19:17.175932  602969 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I1213 19:19:17.175945  602969 kubeadm.go:310] 
	I1213 19:19:17.176058  602969 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token j5o3j6.zgtne4vwby5cxh24 \
	I1213 19:19:17.176186  602969 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3a4ff1c2a595792db2f2ca4f26d9011086ca3d6e4619c022e611d1580ec6ebd4 \
	I1213 19:19:17.176222  602969 kubeadm.go:310] 	--control-plane 
	I1213 19:19:17.176233  602969 kubeadm.go:310] 
	I1213 19:19:17.176328  602969 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I1213 19:19:17.176337  602969 kubeadm.go:310] 
	I1213 19:19:17.176420  602969 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token j5o3j6.zgtne4vwby5cxh24 \
	I1213 19:19:17.176556  602969 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:3a4ff1c2a595792db2f2ca4f26d9011086ca3d6e4619c022e611d1580ec6ebd4 
	I1213 19:19:17.176811  602969 kubeadm.go:310] 	[WARNING SystemVerification]: failed to parse kernel config: unable to load kernel module: "configs", output: "modprobe: FATAL: Module configs not found in directory /lib/modules/5.15.0-1072-aws\n", err: exit status 1
	I1213 19:19:17.176946  602969 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I1213 19:19:17.176973  602969 cni.go:84] Creating CNI manager for ""
	I1213 19:19:17.176982  602969 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 19:19:17.180500  602969 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I1213 19:19:17.182504  602969 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I1213 19:19:17.186376  602969 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.31.2/kubectl ...
	I1213 19:19:17.186397  602969 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I1213 19:19:17.205884  602969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I1213 19:19:17.486806  602969 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I1213 19:19:17.486941  602969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:19:17.487025  602969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-248098 minikube.k8s.io/updated_at=2024_12_13T19_19_17_0700 minikube.k8s.io/version=v1.34.0 minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956 minikube.k8s.io/name=addons-248098 minikube.k8s.io/primary=true
	I1213 19:19:17.495888  602969 ops.go:34] apiserver oom_adj: -16
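
The oom_adj value logged above comes from the earlier `cat /proc/$(pgrep kube-apiserver)/oom_adj` run; -16 tells the kernel OOM killer to strongly prefer other processes over the apiserver. A hedged Go sketch of the same read (illustrative only; it assumes a single kube-apiserver process):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	// pgrep the apiserver, then read its oom_adj, as the Run line above does.
	pid, err := exec.Command("pgrep", "kube-apiserver").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	data, err := os.ReadFile(fmt.Sprintf("/proc/%s/oom_adj", strings.TrimSpace(string(pid))))
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("apiserver oom_adj:", strings.TrimSpace(string(data))) // e.g. -16
}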
	I1213 19:19:17.641903  602969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:19:18.141946  602969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:19:18.642947  602969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:19:19.142076  602969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:19:19.642844  602969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:19:20.141993  602969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:19:20.642024  602969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:19:21.142539  602969 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.31.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I1213 19:19:21.228991  602969 kubeadm.go:1113] duration metric: took 3.742096798s to wait for elevateKubeSystemPrivileges
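
The burst of identical `kubectl get sa default` runs above (one roughly every 500ms from 19:19:17.64 to 19:19:21.14) is a readiness poll: minikube waits for the default ServiceAccount to exist before elevating kube-system privileges via the minikube-rbac cluster role binding. A minimal sketch of such a poll (the kubectl path and kubeconfig are from the log; the 2-minute deadline is an assumption):

package main

import (
	"os/exec"
	"time"
)

func main() {
	kubectl := "/var/lib/minikube/binaries/v1.31.2/kubectl" // path from the log
	deadline := time.Now().Add(2 * time.Minute)             // assumed timeout
	for time.Now().Before(deadline) {
		cmd := exec.Command("sudo", kubectl, "get", "sa", "default",
			"--kubeconfig=/var/lib/minikube/kubeconfig")
		if cmd.Run() == nil {
			break // ServiceAccount is present; privileges can be elevated
		}
		time.Sleep(500 * time.Millisecond) // matches the cadence in the log
	}
}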
	I1213 19:19:21.229030  602969 kubeadm.go:394] duration metric: took 21.971375826s to StartCluster
	I1213 19:19:21.229051  602969 settings.go:142] acquiring lock: {Name:mka9b7535bd979f27733ffa8cb9f79579fa32ca5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:19:21.229190  602969 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/20090-596807/kubeconfig
	I1213 19:19:21.229583  602969 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-596807/kubeconfig: {Name:mka5435b4dfc150b8392bc985a52cf22d376e8bc Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:19:21.230376  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I1213 19:19:21.230408  602969 start.go:235] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I1213 19:19:21.230640  602969 config.go:182] Loaded profile config "addons-248098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:19:21.230676  602969 addons.go:507] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:true auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volcano:true volumesnapshots:true yakd:true]
	I1213 19:19:21.230746  602969 addons.go:69] Setting yakd=true in profile "addons-248098"
	I1213 19:19:21.230759  602969 addons.go:234] Setting addon yakd=true in "addons-248098"
	I1213 19:19:21.230782  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.231257  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.231623  602969 addons.go:69] Setting inspektor-gadget=true in profile "addons-248098"
	I1213 19:19:21.231651  602969 addons.go:234] Setting addon inspektor-gadget=true in "addons-248098"
	I1213 19:19:21.231687  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.232164  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.232320  602969 addons.go:69] Setting metrics-server=true in profile "addons-248098"
	I1213 19:19:21.232341  602969 addons.go:234] Setting addon metrics-server=true in "addons-248098"
	I1213 19:19:21.232366  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.232771  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.233276  602969 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-248098"
	I1213 19:19:21.233302  602969 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-248098"
	I1213 19:19:21.233330  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.233747  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.235082  602969 addons.go:69] Setting amd-gpu-device-plugin=true in profile "addons-248098"
	I1213 19:19:21.235115  602969 addons.go:234] Setting addon amd-gpu-device-plugin=true in "addons-248098"
	I1213 19:19:21.235145  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.235594  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.236644  602969 addons.go:69] Setting registry=true in profile "addons-248098"
	I1213 19:19:21.236672  602969 addons.go:234] Setting addon registry=true in "addons-248098"
	I1213 19:19:21.236702  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.237136  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.240546  602969 addons.go:69] Setting cloud-spanner=true in profile "addons-248098"
	I1213 19:19:21.240607  602969 addons.go:234] Setting addon cloud-spanner=true in "addons-248098"
	I1213 19:19:21.240646  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.241341  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.255273  602969 addons.go:69] Setting storage-provisioner=true in profile "addons-248098"
	I1213 19:19:21.255307  602969 addons.go:234] Setting addon storage-provisioner=true in "addons-248098"
	I1213 19:19:21.255344  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.255819  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.255999  602969 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-248098"
	I1213 19:19:21.256040  602969 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-248098"
	I1213 19:19:21.256063  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.265178  602969 addons.go:69] Setting default-storageclass=true in profile "addons-248098"
	I1213 19:19:21.265277  602969 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-248098"
	I1213 19:19:21.266138  602969 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-248098"
	I1213 19:19:21.266224  602969 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-248098"
	I1213 19:19:21.266617  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.266936  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.280307  602969 addons.go:69] Setting gcp-auth=true in profile "addons-248098"
	I1213 19:19:21.284706  602969 mustload.go:65] Loading cluster: addons-248098
	I1213 19:19:21.284944  602969 config.go:182] Loaded profile config "addons-248098": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:19:21.285251  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.290520  602969 addons.go:69] Setting volcano=true in profile "addons-248098"
	I1213 19:19:21.290615  602969 addons.go:234] Setting addon volcano=true in "addons-248098"
	I1213 19:19:21.290692  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.291333  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.299441  602969 addons.go:69] Setting ingress=true in profile "addons-248098"
	I1213 19:19:21.299520  602969 addons.go:234] Setting addon ingress=true in "addons-248098"
	I1213 19:19:21.299632  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.300592  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.310583  602969 addons.go:69] Setting volumesnapshots=true in profile "addons-248098"
	I1213 19:19:21.310623  602969 addons.go:234] Setting addon volumesnapshots=true in "addons-248098"
	I1213 19:19:21.310661  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.311152  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.322127  602969 addons.go:69] Setting ingress-dns=true in profile "addons-248098"
	I1213 19:19:21.322221  602969 addons.go:234] Setting addon ingress-dns=true in "addons-248098"
	I1213 19:19:21.322359  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.322982  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
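
Each "Setting addon ... in \"addons-248098\"" block above ends with the same host check: ask Docker for the node container's state before wiring the addon up. A minimal Go equivalent of that repeated Run line (illustrative; the container name and format template are copied from the log):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// docker container inspect addons-248098 --format={{.State.Status}}
	out, err := exec.Command("docker", "container", "inspect",
		"addons-248098", "--format", "{{.State.Status}}").Output()
	if err != nil {
		panic(err)
	}
	fmt.Println(strings.TrimSpace(string(out))) // "running" while the node is up
}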
	I1213 19:19:21.325448  602969 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I1213 19:19:21.328554  602969 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 19:19:21.328581  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I1213 19:19:21.328649  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:21.333576  602969 out.go:177] * Verifying Kubernetes components...
	I1213 19:19:21.353965  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.380274  602969 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I1213 19:19:21.412609  602969 out.go:177]   - Using image docker.io/rocm/k8s-device-plugin:1.25.2.8
	I1213 19:19:21.420870  602969 addons.go:431] installing /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 19:19:21.420936  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/amd-gpu-device-plugin.yaml (1868 bytes)
	I1213 19:19:21.421040  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:21.438711  602969 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.17.0
	I1213 19:19:21.439164  602969 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.5
	I1213 19:19:21.441663  602969 out.go:177]   - Using image docker.io/registry:2.8.3
	I1213 19:19:21.442042  602969 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.35.0
	I1213 19:19:21.447223  602969 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I1213 19:19:21.448274  602969 addons.go:431] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 19:19:21.448331  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I1213 19:19:21.448420  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:21.459755  602969 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.25
	I1213 19:19:21.462611  602969 addons.go:431] installing /etc/kubernetes/addons/yakd-ns.yaml
	I1213 19:19:21.462680  602969 ssh_runner.go:362] scp yakd/yakd-ns.yaml --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I1213 19:19:21.462784  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:21.463994  602969 addons.go:431] installing /etc/kubernetes/addons/deployment.yaml
	I1213 19:19:21.464049  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I1213 19:19:21.464133  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:21.482413  602969 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.2
	I1213 19:19:21.482669  602969 addons.go:431] installing /etc/kubernetes/addons/ig-crd.yaml
	I1213 19:19:21.482683  602969 ssh_runner.go:362] scp inspektor-gadget/ig-crd.yaml --> /etc/kubernetes/addons/ig-crd.yaml (5248 bytes)
	I1213 19:19:21.482759  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:21.498395  602969 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.11.3
	I1213 19:19:21.503674  602969 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1213 19:19:21.508350  602969 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1213 19:19:21.508597  602969 addons.go:431] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I1213 19:19:21.508617  602969 ssh_runner.go:362] scp metrics-server/metrics-apiservice.yaml --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I1213 19:19:21.508684  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:21.483797  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.495853  602969 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-248098"
	I1213 19:19:21.510702  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.511144  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	W1213 19:19:21.523070  602969 out.go:270] ! Enabling 'volcano' returned an error: running callbacks: [volcano addon does not support crio]
	I1213 19:19:21.495897  602969 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I1213 19:19:21.525493  602969 ssh_runner.go:362] scp volumesnapshots/csi-hostpath-snapshotclass.yaml --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I1213 19:19:21.525560  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:21.531786  602969 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.3
	I1213 19:19:21.534286  602969 addons.go:431] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 19:19:21.534309  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I1213 19:19:21.534383  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:21.546429  602969 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.8
	I1213 19:19:21.548949  602969 addons.go:431] installing /etc/kubernetes/addons/registry-rc.yaml
	I1213 19:19:21.548974  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (860 bytes)
	I1213 19:19:21.549037  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:21.570396  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
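
The sshutil lines above open SSH sessions into the node container on the host port mapped from 22/tcp (33512 here, discovered by the docker inspect HostPort template a few lines earlier). A hedged sketch using golang.org/x/crypto/ssh (not minikube's sshutil; the port, key path, and user are copied from the log):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa")
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a local test node
	}
	client, err := ssh.Dial("tcp", "127.0.0.1:33512", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()
	fmt.Println("connected to node container over SSH")
}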
	I1213 19:19:21.577265  602969 addons.go:431] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 19:19:21.577286  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I1213 19:19:21.577348  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:21.591406  602969 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I1213 19:19:21.593808  602969 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I1213 19:19:21.595784  602969 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I1213 19:19:21.603024  602969 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I1213 19:19:21.605198  602969 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I1213 19:19:21.607893  602969 addons.go:234] Setting addon default-storageclass=true in "addons-248098"
	I1213 19:19:21.607928  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:21.608339  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:21.610620  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:21.613014  602969 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I1213 19:19:21.615144  602969 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I1213 19:19:21.615254  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:21.623841  602969 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I1213 19:19:21.630322  602969 addons.go:431] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I1213 19:19:21.630356  602969 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-attacher.yaml --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I1213 19:19:21.630446  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:21.643406  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:21.715448  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:21.716808  602969 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I1213 19:19:21.719377  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:21.720893  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:21.722077  602969 out.go:177]   - Using image docker.io/busybox:stable
	I1213 19:19:21.724551  602969 addons.go:431] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 19:19:21.724577  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I1213 19:19:21.724644  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:21.786959  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:21.806260  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:21.807337  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:21.813786  602969 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I1213 19:19:21.813806  602969 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I1213 19:19:21.813880  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:21.815114  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:21.831472  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:21.837880  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:21.856326  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I1213 19:19:21.869818  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:21.923870  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml
	I1213 19:19:22.001811  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I1213 19:19:22.058636  602969 addons.go:431] installing /etc/kubernetes/addons/yakd-sa.yaml
	I1213 19:19:22.058671  602969 ssh_runner.go:362] scp yakd/yakd-sa.yaml --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I1213 19:19:22.180538  602969 addons.go:431] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I1213 19:19:22.180560  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I1213 19:19:22.187506  602969 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I1213 19:19:22.187588  602969 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotclasses.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I1213 19:19:22.221301  602969 addons.go:431] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I1213 19:19:22.221406  602969 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-hostpath.yaml --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I1213 19:19:22.245357  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I1213 19:19:22.251332  602969 addons.go:431] installing /etc/kubernetes/addons/yakd-crb.yaml
	I1213 19:19:22.251408  602969 ssh_runner.go:362] scp yakd/yakd-crb.yaml --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I1213 19:19:22.272828  602969 addons.go:431] installing /etc/kubernetes/addons/ig-deployment.yaml
	I1213 19:19:22.272865  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-deployment.yaml (14576 bytes)
	I1213 19:19:22.287633  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I1213 19:19:22.292810  602969 addons.go:431] installing /etc/kubernetes/addons/registry-svc.yaml
	I1213 19:19:22.292893  602969 ssh_runner.go:362] scp registry/registry-svc.yaml --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I1213 19:19:22.302161  602969 addons.go:431] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I1213 19:19:22.302249  602969 ssh_runner.go:362] scp metrics-server/metrics-server-rbac.yaml --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I1213 19:19:22.308889  602969 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I1213 19:19:22.308959  602969 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshotcontents.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I1213 19:19:22.367800  602969 addons.go:431] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I1213 19:19:22.367888  602969 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-health-monitor-controller.yaml --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I1213 19:19:22.381548  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I1213 19:19:22.418969  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I1213 19:19:22.427226  602969 addons.go:431] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 19:19:22.427295  602969 ssh_runner.go:362] scp metrics-server/metrics-server-service.yaml --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I1213 19:19:22.460529  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I1213 19:19:22.487312  602969 addons.go:431] installing /etc/kubernetes/addons/yakd-svc.yaml
	I1213 19:19:22.487419  602969 ssh_runner.go:362] scp yakd/yakd-svc.yaml --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I1213 19:19:22.492906  602969 addons.go:431] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I1213 19:19:22.492991  602969 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-provisioner.yaml --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I1213 19:19:22.502366  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml
	I1213 19:19:22.513766  602969 addons.go:431] installing /etc/kubernetes/addons/registry-proxy.yaml
	I1213 19:19:22.513843  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I1213 19:19:22.550994  602969 addons.go:431] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I1213 19:19:22.551082  602969 ssh_runner.go:362] scp volumesnapshots/snapshot.storage.k8s.io_volumesnapshots.yaml --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I1213 19:19:22.576625  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I1213 19:19:22.622394  602969 addons.go:431] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I1213 19:19:22.622461  602969 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-resizer.yaml --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I1213 19:19:22.667331  602969 addons.go:431] installing /etc/kubernetes/addons/yakd-dp.yaml
	I1213 19:19:22.667409  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I1213 19:19:22.686529  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I1213 19:19:22.727043  602969 addons.go:431] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I1213 19:19:22.727128  602969 ssh_runner.go:362] scp volumesnapshots/rbac-volume-snapshot-controller.yaml --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I1213 19:19:22.730868  602969 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (1.500445437s)
	I1213 19:19:22.730988  602969 ssh_runner.go:235] Completed: sudo systemctl daemon-reload: (1.350672368s)
	I1213 19:19:22.731177  602969 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I1213 19:19:22.731216  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
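
The sed pipeline above rewrites the CoreDNS ConfigMap in place: it inserts a hosts block just before the forward directive (and a log directive before errors) so that host.minikube.internal resolves to the gateway at 192.168.49.1. After the replace, the relevant Corefile fragment should look roughly like this (reconstructed from the sed expressions, not captured output):

        log
        errors
        ...
        hosts {
           192.168.49.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf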
	I1213 19:19:22.814545  602969 addons.go:431] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I1213 19:19:22.814622  602969 ssh_runner.go:362] scp csi-hostpath-driver/rbac/rbac-external-snapshotter.yaml --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I1213 19:19:22.866479  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I1213 19:19:22.952807  602969 addons.go:431] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 19:19:22.952888  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I1213 19:19:23.027535  602969 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I1213 19:19:23.027616  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I1213 19:19:23.096867  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 19:19:23.140904  602969 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I1213 19:19:23.140974  602969 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-driverinfo.yaml --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I1213 19:19:23.213097  602969 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I1213 19:19:23.213164  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I1213 19:19:23.284483  602969 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I1213 19:19:23.284559  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I1213 19:19:23.313960  602969 addons.go:431] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 19:19:23.314030  602969 ssh_runner.go:362] scp csi-hostpath-driver/deploy/csi-hostpath-storageclass.yaml --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I1213 19:19:23.403457  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I1213 19:19:26.482868  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/amd-gpu-device-plugin.yaml: (4.558964552s)
	I1213 19:19:26.482971  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (4.481077701s)
	I1213 19:19:26.483045  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (4.237614796s)
	I1213 19:19:26.483100  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (4.195445671s)
	I1213 19:19:26.483176  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (4.626823049s)
	I1213 19:19:26.653244  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (4.271604446s)
	I1213 19:19:26.653524  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (4.234460463s)
	W1213 19:19:26.730600  602969 out.go:270] ! Enabling 'storage-provisioner-rancher' returned an error: running callbacks: [Error making local-path the default storage class: Error while marking storage class local-path as default: Operation cannot be fulfilled on storageclasses.storage.k8s.io "local-path": the object has been modified; please apply your changes to the latest version and try again]
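That "the object has been modified" warning is Kubernetes optimistic concurrency at work: something updated the storageclass between minikube's read and its write, so the stale resourceVersion was rejected and the addon surfaces a non-fatal warning. A server-side patch sidesteps the read-modify-write race; this is the standard way to mark a default class, not necessarily the exact call minikube makes:

    kubectl patch storageclass local-path -p \
      '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'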
	I1213 19:19:28.119649  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (5.659025174s)
	I1213 19:19:28.119679  602969 addons.go:475] Verifying addon ingress=true in "addons-248098"
	I1213 19:19:28.119926  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-deployment.yaml: (5.617462835s)
	I1213 19:19:28.119999  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (5.543296224s)
	I1213 19:19:28.120008  602969 addons.go:475] Verifying addon metrics-server=true in "addons-248098"
	I1213 19:19:28.120034  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (5.433440279s)
	I1213 19:19:28.120042  602969 addons.go:475] Verifying addon registry=true in "addons-248098"
	I1213 19:19:28.120316  602969 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.49.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.31.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.389064075s)
	I1213 19:19:28.120347  602969 start.go:971] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS's ConfigMap
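The sed pipeline completed at 19:19:28.120316 splices a hosts block into the coredns ConfigMap ahead of the forward directive (and a log directive ahead of errors), which is how host.minikube.internal becomes resolvable from inside the cluster. The injected Corefile fragment:

    hosts {
       192.168.49.1 host.minikube.internal
       fallthrough
    }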
	I1213 19:19:28.121375  602969 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.390177239s)
	I1213 19:19:28.122131  602969 node_ready.go:35] waiting up to 6m0s for node "addons-248098" to be "Ready" ...
	I1213 19:19:28.122346  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (5.255778687s)
	I1213 19:19:28.122682  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (5.025719663s)
	W1213 19:19:28.122720  602969 addons.go:457] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I1213 19:19:28.122738  602969 retry.go:31] will retry after 368.887977ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
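This is the usual CRD establishment race: the VolumeSnapshotClass object is applied in the same kubectl invocation that creates the snapshot.storage.k8s.io CRDs, and the API server is not yet serving the new kind, hence "resource mapping not found ... ensure CRDs are installed first". minikube's answer is the 368ms retry logged above; done by hand, the ordering can be made explicit instead:

    kubectl apply -f snapshot.storage.k8s.io_volumesnapshotclasses.yaml
    kubectl wait crd/volumesnapshotclasses.snapshot.storage.k8s.io \
      --for=condition=established --timeout=60s
    kubectl apply -f csi-hostpath-snapshotclass.yaml   # CRs only once the CRD is served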
	I1213 19:19:28.122848  602969 out.go:177] * Verifying ingress addon...
	I1213 19:19:28.122940  602969 out.go:177] * Verifying registry addon...
	I1213 19:19:28.125945  602969 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-248098 service yakd-dashboard -n yakd-dashboard
	
	I1213 19:19:28.126854  602969 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I1213 19:19:28.127952  602969 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I1213 19:19:28.178514  602969 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 19:19:28.178607  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:28.181483  602969 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I1213 19:19:28.181566  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
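Every kapi.go:75/96 pair in this log is the same poll: list pods by label selector, record the phase, sleep, repeat until Running and Ready. A rough kubectl equivalent of the two waits just started (an approximation, not minikube's actual client-go calls):

    kubectl -n ingress-nginx wait pod -l app.kubernetes.io/name=ingress-nginx \
      --for=condition=Ready --timeout=6m
    kubectl -n kube-system wait pod -l kubernetes.io/minikube-addons=registry \
      --for=condition=Ready --timeout=6m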
	I1213 19:19:28.491896  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I1213 19:19:28.653877  602969 kapi.go:214] "coredns" deployment in "kube-system" namespace and "addons-248098" context rescaled to 1 replicas
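kapi.go:214 trims coredns from the stock two replicas down to one, which is plenty for a single-node cluster; by hand that is:

    kubectl -n kube-system scale deployment coredns --replicas=1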
	I1213 19:19:28.656611  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:28.656806  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:28.999336  602969 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (5.595772838s)
	I1213 19:19:28.999422  602969 addons.go:475] Verifying addon csi-hostpath-driver=true in "addons-248098"
	I1213 19:19:29.004093  602969 out.go:177] * Verifying csi-hostpath-driver addon...
	I1213 19:19:29.007703  602969 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I1213 19:19:29.027443  602969 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 19:19:29.027466  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:29.141297  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:29.142221  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:29.512286  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:29.634043  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:29.635325  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:30.018614  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:30.126763  602969 node_ready.go:53] node "addons-248098" has status "Ready":"False"
	I1213 19:19:30.141230  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:30.144333  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:30.511781  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:30.631473  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:30.632040  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:31.013033  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:31.131738  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:31.132589  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:31.512185  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:31.631006  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:31.631908  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:31.862556  602969 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I1213 19:19:31.862644  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:31.881197  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
	I1213 19:19:31.993912  602969 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I1213 19:19:32.015203  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:32.029154  602969 addons.go:234] Setting addon gcp-auth=true in "addons-248098"
	I1213 19:19:32.029264  602969 host.go:66] Checking if "addons-248098" exists ...
	I1213 19:19:32.029785  602969 cli_runner.go:164] Run: docker container inspect addons-248098 --format={{.State.Status}}
	I1213 19:19:32.059441  602969 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I1213 19:19:32.059508  602969 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" addons-248098
	I1213 19:19:32.078992  602969 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33512 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa Username:docker}
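Both SSH sessions above are dialed the same way: the node container publishes 22/tcp on an ephemeral host port, minikube extracts it with the Go template shown, and connects to 127.0.0.1. Reproducible with the exact values from this run:

    PORT=$(docker container inspect -f \
      '{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}' addons-248098)   # 33512 here
    ssh -i /home/jenkins/minikube-integration/20090-596807/.minikube/machines/addons-248098/id_rsa \
      -p "$PORT" docker@127.0.0.1 'cat /var/lib/minikube/google_application_credentials.json'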
	I1213 19:19:32.131417  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:32.131771  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:32.132258  602969 node_ready.go:53] node "addons-248098" has status "Ready":"False"
	I1213 19:19:32.192934  602969 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.4
	I1213 19:19:32.195358  602969 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.3
	I1213 19:19:32.197626  602969 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I1213 19:19:32.197657  602969 ssh_runner.go:362] scp gcp-auth/gcp-auth-ns.yaml --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I1213 19:19:32.216983  602969 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I1213 19:19:32.217006  602969 ssh_runner.go:362] scp gcp-auth/gcp-auth-service.yaml --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I1213 19:19:32.235808  602969 addons.go:431] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 19:19:32.235833  602969 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I1213 19:19:32.255597  602969 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.31.2/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I1213 19:19:32.513233  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:32.636785  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:32.637301  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:32.775691  602969 addons.go:475] Verifying addon gcp-auth=true in "addons-248098"
	I1213 19:19:32.779925  602969 out.go:177] * Verifying gcp-auth addon...
	I1213 19:19:32.785024  602969 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I1213 19:19:32.816723  602969 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I1213 19:19:32.816751  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:33.018884  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:33.131804  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:33.133704  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:33.289115  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:33.511882  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:33.631923  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:33.632439  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:33.788543  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:34.012574  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:34.131105  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:34.131521  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:34.288721  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:34.511845  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:34.626878  602969 node_ready.go:53] node "addons-248098" has status "Ready":"False"
	I1213 19:19:34.631341  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:34.633312  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:34.789417  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:35.015921  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:35.131195  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:35.131570  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:35.289725  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:35.511913  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:35.631394  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:35.632893  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:35.788325  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:36.012571  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:36.131216  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:36.132275  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:36.288541  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:36.512198  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:36.631774  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:36.633266  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:36.788674  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:37.014562  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:37.125705  602969 node_ready.go:53] node "addons-248098" has status "Ready":"False"
	I1213 19:19:37.131666  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:37.132381  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:37.289036  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:37.512079  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:37.631486  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:37.632552  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:37.788860  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:38.013292  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:38.131696  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:38.132255  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:38.288818  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:38.511431  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:38.631017  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:38.631919  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:38.789005  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:39.013018  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:39.125807  602969 node_ready.go:53] node "addons-248098" has status "Ready":"False"
	I1213 19:19:39.131912  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:39.132342  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:39.288913  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:39.546103  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:39.634124  602969 node_ready.go:49] node "addons-248098" has status "Ready":"True"
	I1213 19:19:39.634153  602969 node_ready.go:38] duration metric: took 11.511992619s for node "addons-248098" to be "Ready" ...
	I1213 19:19:39.634164  602969 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
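node_ready/pod_ready repeat the poll pattern at cluster scope: first the Node Ready condition, then one wait per system-critical label from the list above. Sketched with kubectl:

    kubectl wait node/addons-248098 --for=condition=Ready --timeout=6m
    for l in k8s-app=kube-dns component=etcd component=kube-apiserver \
             component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler; do
      kubectl -n kube-system wait pod -l "$l" --for=condition=Ready --timeout=6m
    done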
	I1213 19:19:39.648967  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:39.653911  602969 pod_ready.go:79] waiting up to 6m0s for pod "coredns-7c65d6cfc9-bt6ls" in "kube-system" namespace to be "Ready" ...
	I1213 19:19:39.656285  602969 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I1213 19:19:39.656313  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:39.871358  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:40.068754  602969 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I1213 19:19:40.068783  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:40.175155  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:40.176729  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:40.324497  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:40.514084  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:40.631680  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:40.632493  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:40.794699  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:41.015953  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:41.132088  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:41.132663  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:41.160673  602969 pod_ready.go:93] pod "coredns-7c65d6cfc9-bt6ls" in "kube-system" namespace has status "Ready":"True"
	I1213 19:19:41.160700  602969 pod_ready.go:82] duration metric: took 1.506750951s for pod "coredns-7c65d6cfc9-bt6ls" in "kube-system" namespace to be "Ready" ...
	I1213 19:19:41.160728  602969 pod_ready.go:79] waiting up to 6m0s for pod "etcd-addons-248098" in "kube-system" namespace to be "Ready" ...
	I1213 19:19:41.167909  602969 pod_ready.go:93] pod "etcd-addons-248098" in "kube-system" namespace has status "Ready":"True"
	I1213 19:19:41.167936  602969 pod_ready.go:82] duration metric: took 7.198218ms for pod "etcd-addons-248098" in "kube-system" namespace to be "Ready" ...
	I1213 19:19:41.167950  602969 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-addons-248098" in "kube-system" namespace to be "Ready" ...
	I1213 19:19:41.175836  602969 pod_ready.go:93] pod "kube-apiserver-addons-248098" in "kube-system" namespace has status "Ready":"True"
	I1213 19:19:41.175861  602969 pod_ready.go:82] duration metric: took 7.896877ms for pod "kube-apiserver-addons-248098" in "kube-system" namespace to be "Ready" ...
	I1213 19:19:41.175876  602969 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-addons-248098" in "kube-system" namespace to be "Ready" ...
	I1213 19:19:41.188852  602969 pod_ready.go:93] pod "kube-controller-manager-addons-248098" in "kube-system" namespace has status "Ready":"True"
	I1213 19:19:41.188879  602969 pod_ready.go:82] duration metric: took 12.994611ms for pod "kube-controller-manager-addons-248098" in "kube-system" namespace to be "Ready" ...
	I1213 19:19:41.188894  602969 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-rcbrb" in "kube-system" namespace to be "Ready" ...
	I1213 19:19:41.232429  602969 pod_ready.go:93] pod "kube-proxy-rcbrb" in "kube-system" namespace has status "Ready":"True"
	I1213 19:19:41.232451  602969 pod_ready.go:82] duration metric: took 43.55018ms for pod "kube-proxy-rcbrb" in "kube-system" namespace to be "Ready" ...
	I1213 19:19:41.232462  602969 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-addons-248098" in "kube-system" namespace to be "Ready" ...
	I1213 19:19:41.289689  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:41.513507  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:41.626175  602969 pod_ready.go:93] pod "kube-scheduler-addons-248098" in "kube-system" namespace has status "Ready":"True"
	I1213 19:19:41.626347  602969 pod_ready.go:82] duration metric: took 393.875067ms for pod "kube-scheduler-addons-248098" in "kube-system" namespace to be "Ready" ...
	I1213 19:19:41.626366  602969 pod_ready.go:79] waiting up to 6m0s for pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace to be "Ready" ...
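Note that metrics-server never turns Ready in the remainder of this excerpt (every pod_ready:103 line below reports "Ready":"False"). When triaging that, pod phase alone is not enough, because metrics-server is an aggregated API; assuming the usual k8s-app=metrics-server label and APIService name v1beta1.metrics.k8s.io:

    kubectl -n kube-system get pods -l k8s-app=metrics-server
    kubectl get apiservice v1beta1.metrics.k8s.io   # Available must report True
    kubectl top node                                # exercises the metrics pipeline end to end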
	I1213 19:19:41.635558  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:41.637958  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:41.788507  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:42.026734  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:42.137519  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:42.139399  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:42.289568  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:42.515967  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:42.649268  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:42.651606  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:42.789418  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:43.014152  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:43.133188  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:43.134981  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:43.290443  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:43.512663  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:43.632511  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:43.634455  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:43.635299  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:19:43.789909  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:44.014063  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:44.133266  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:44.134645  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:44.288648  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:44.512023  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:44.644352  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:44.646135  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:44.792681  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:45.023186  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:45.149208  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:45.150477  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:45.291751  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:45.512953  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:45.637673  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:45.638589  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:45.640894  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:19:45.796714  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:46.016964  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:46.143385  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:46.145482  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:46.289284  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:46.512969  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:46.637122  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:46.641525  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:46.788321  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:47.016793  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:47.139112  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:47.141348  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:47.288749  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:47.513840  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:47.633496  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:47.637674  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:47.789373  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:48.015282  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:48.133087  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:48.134694  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:48.136494  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:19:48.289117  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:48.512513  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:48.653344  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:48.660119  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:48.790039  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:49.014997  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:49.144771  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:49.148704  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:49.289337  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:49.512915  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:49.637487  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:49.637748  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:49.795640  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:50.033774  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:50.144261  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:50.144566  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:50.146679  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:19:50.288390  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:50.514008  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:50.648275  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:50.649898  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:50.788546  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:51.014168  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:51.134498  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:51.135723  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:51.293382  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:51.514369  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:51.637001  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:51.639183  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:51.789516  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:52.018454  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:52.132634  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:52.134129  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:52.289744  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:52.512798  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:52.647245  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:52.648787  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:52.653226  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:19:52.788935  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:53.014878  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:53.135610  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:53.138325  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:53.289605  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:53.513675  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:53.633387  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:53.636095  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:53.788718  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:54.020268  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:54.132212  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:54.132739  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:54.288627  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:54.513437  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:54.639017  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:54.639361  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:54.788600  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:55.019780  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:55.135485  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:55.136975  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:55.143199  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:19:55.288914  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:55.514550  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:55.633239  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:55.634475  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:55.789007  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:56.017594  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:56.133967  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:56.134589  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:56.288756  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:56.512643  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:56.632760  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:56.635279  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:56.788398  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:57.013827  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:57.132172  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:57.133877  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:57.288213  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:57.513044  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:57.633465  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:57.634971  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:57.638124  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:19:57.789388  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:58.013941  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:58.133369  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:58.134953  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:58.289323  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:58.513659  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:58.633687  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:58.636209  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:58.789326  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:59.013883  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:59.155798  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:59.156038  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:59.288594  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:19:59.512330  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:19:59.638783  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:19:59.640044  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:19:59.640781  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:19:59.790902  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:00.070081  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:00.156179  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:00.166602  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:00.316992  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:00.527241  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:00.676815  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:00.712768  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:00.847142  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:01.020393  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:01.152526  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:01.167867  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:01.289311  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:01.512968  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:01.634331  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:01.637221  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:01.789851  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:02.029159  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:02.137616  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:02.148018  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:02.150699  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:02.289957  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:02.521936  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:02.635028  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:02.640003  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:02.791133  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:03.015522  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:03.132739  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:03.133141  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:03.290347  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:03.512827  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:03.635478  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:03.636481  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:03.790544  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:04.014368  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:04.135801  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:04.137715  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:04.289648  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:04.512571  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:04.640686  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:04.642543  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:04.644580  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:04.812287  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:05.044546  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:05.136402  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:05.143417  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:05.289020  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:05.514026  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:05.638751  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:05.640261  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:05.793173  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:06.015932  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:06.133981  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:06.135052  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:06.288936  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:06.513436  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:06.633907  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:06.635179  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:06.789073  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:07.015510  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:07.136087  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:07.136169  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:07.139564  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:07.289141  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:07.513160  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:07.633094  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:07.634880  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:07.790913  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:08.014398  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:08.137505  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:08.140139  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:08.289634  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:08.513350  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:08.632550  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:08.634494  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:08.788437  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:09.016508  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:09.139393  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:09.141910  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:09.155680  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:09.289744  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:09.514429  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:09.634616  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:09.635300  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:09.793078  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:10.026954  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:10.132998  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:10.134060  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:10.289024  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:10.513687  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:10.633539  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:10.634473  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:10.788547  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:11.014898  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:11.135610  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:11.138976  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:11.288860  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:11.513364  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:11.632145  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:11.633274  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:11.633863  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:11.789034  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:12.023861  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:12.139256  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:12.140575  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:12.289201  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:12.517252  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:12.633790  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:12.635523  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:12.789376  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:13.016912  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:13.139113  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:13.141565  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:13.288941  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:13.518018  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:13.632009  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:13.634862  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:13.634895  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:13.788636  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:14.018063  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:14.135813  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:14.136337  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I1213 19:20:14.289217  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:14.513284  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:14.633631  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:14.635788  602969 kapi.go:107] duration metric: took 46.507833164s to wait for kubernetes.io/minikube-addons=registry ...
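(The duration metric just above, and the 1m16s one for ingress-nginx further down, come from kapi.go's label-selector poll loop, which retries roughly every 500ms until the matching pods leave Pending. A rough standalone equivalent of that wait, sketched with kubectl; the kube-system namespace for the registry addon pods and the 6m timeout are assumptions, not values taken from this log:

	kubectl --context addons-248098 -n kube-system wait pod \
	  -l kubernetes.io/minikube-addons=registry \
	  --for=condition=Ready --timeout=6m
)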
	I1213 19:20:14.789381  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:15.029174  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:15.141655  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:15.289034  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:15.513478  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:15.633942  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:15.789331  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:16.015948  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:16.136565  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:16.145005  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:16.288915  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:16.513700  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:16.636525  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:16.789727  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:17.014867  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:17.137578  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:17.289841  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:17.513784  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:17.633105  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:17.789484  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:18.022624  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:18.140248  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:18.288599  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:18.513057  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:18.632575  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:18.640037  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:18.789504  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:19.022468  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:19.133212  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:19.289041  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:19.513478  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:19.632476  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:19.789055  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:20.020624  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:20.143687  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:20.290244  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:20.513839  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:20.633022  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:20.789103  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:21.030957  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:21.137112  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:21.139362  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:21.288758  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:21.513599  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:21.634029  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:21.789807  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:22.017041  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:22.133097  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:22.304824  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:22.513267  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:22.633313  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:22.788346  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:23.017822  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:23.131759  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:23.289610  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:23.513266  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:23.636653  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:23.638376  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:23.789193  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:24.020867  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:24.135162  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:24.290797  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:24.521966  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:24.642084  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:24.788736  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:25.015145  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:25.135198  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:25.289972  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:25.513394  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:25.634048  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:25.789882  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:26.014546  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:26.136141  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:26.146013  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:26.293030  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:26.518439  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:26.634951  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:26.792262  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:27.013288  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:27.132968  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:27.289216  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:27.514447  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:27.632103  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:27.789207  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:28.017253  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:28.145210  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:28.289132  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:28.520569  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:28.635024  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:28.635875  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:28.789278  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:29.014206  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:29.139383  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:29.288706  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:29.513883  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:29.634664  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:29.791618  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:30.020353  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:30.139727  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:30.288465  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:30.515253  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:30.635961  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:30.637639  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:30.789132  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:31.015337  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:31.134245  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:31.289615  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:31.514090  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:31.633246  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:31.789185  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:32.015503  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:32.131557  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:32.289416  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:32.518303  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:32.640843  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:32.643531  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:32.789628  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:33.018720  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:33.134361  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:33.288610  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:33.512882  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:33.633328  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:33.789183  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:34.013826  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:34.133333  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:34.288683  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:34.513055  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:34.638394  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:34.789863  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:35.018056  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:35.134152  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:35.134873  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:35.288402  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:35.514464  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:35.641587  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:35.790447  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:36.034212  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:36.136737  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:36.288750  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:36.514366  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:36.638336  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:36.800939  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:37.025466  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:37.137327  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:37.138588  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:37.289196  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:37.512883  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:37.652347  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:37.791008  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:38.014218  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:38.134012  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:38.300160  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:38.523258  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:38.647934  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:38.804079  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:39.066002  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:39.214225  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:39.228628  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:39.293700  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:39.513347  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:39.632635  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:39.793893  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:40.034033  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:40.142980  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:40.290171  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:40.512997  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:40.637209  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:40.789419  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:41.015513  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:41.134702  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:41.292884  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:41.513085  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:41.639119  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:41.645783  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:41.790097  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:42.015437  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:42.133077  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:42.289343  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:42.513468  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:42.634784  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:42.789568  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:43.024691  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:43.134400  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:43.291754  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:43.516872  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:43.635803  602969 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I1213 19:20:43.790316  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:44.015891  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:44.133459  602969 kapi.go:107] duration metric: took 1m16.006613767s to wait for app.kubernetes.io/name=ingress-nginx ...
	I1213 19:20:44.136564  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:44.289895  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:44.512860  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:44.789321  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:45.099748  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:45.304635  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:45.513890  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:45.789078  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:46.017143  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:46.289593  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:46.512358  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:46.634782  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:46.789287  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:47.013249  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:47.289667  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:47.513740  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:47.789566  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:48.033765  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:48.292904  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:48.515082  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:48.636139  602969 pod_ready.go:103] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"False"
	I1213 19:20:48.789523  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I1213 19:20:49.013959  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:49.290131  602969 kapi.go:107] duration metric: took 1m16.505104895s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I1213 19:20:49.292413  602969 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-248098 cluster.
	I1213 19:20:49.294959  602969 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I1213 19:20:49.297342  602969 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
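(The three gcp-auth hints above map onto concrete commands. Opting a single pod out of credential mounting uses the gcp-auth-skip-secret label key named in the message, and re-mounting into existing pods uses the --refresh flag it mentions; the pod name and label value here are illustrative:

	kubectl --context addons-248098 label pod my-pod gcp-auth-skip-secret=true
	minikube -p addons-248098 addons enable gcp-auth --refresh
)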
	I1213 19:20:49.512561  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:50.051929  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:50.159213  602969 pod_ready.go:93] pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace has status "Ready":"True"
	I1213 19:20:50.159240  602969 pod_ready.go:82] duration metric: took 1m8.532866479s for pod "metrics-server-84c5f94fbc-g7jcr" in "kube-system" namespace to be "Ready" ...
	I1213 19:20:50.159254  602969 pod_ready.go:79] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-xsrsn" in "kube-system" namespace to be "Ready" ...
	I1213 19:20:50.185362  602969 pod_ready.go:93] pod "nvidia-device-plugin-daemonset-xsrsn" in "kube-system" namespace has status "Ready":"True"
	I1213 19:20:50.185387  602969 pod_ready.go:82] duration metric: took 26.125113ms for pod "nvidia-device-plugin-daemonset-xsrsn" in "kube-system" namespace to be "Ready" ...
	I1213 19:20:50.185410  602969 pod_ready.go:39] duration metric: took 1m10.551212061s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I1213 19:20:50.185430  602969 api_server.go:52] waiting for apiserver process to appear ...
	I1213 19:20:50.185462  602969 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:20:50.185531  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:20:50.294525  602969 cri.go:89] found id: "27ee00545a23ca3d022b68468445151371316d97acbaf2235b93791b944d3e2d"
	I1213 19:20:50.294549  602969 cri.go:89] found id: ""
	I1213 19:20:50.294557  602969 logs.go:282] 1 containers: [27ee00545a23ca3d022b68468445151371316d97acbaf2235b93791b944d3e2d]
	I1213 19:20:50.294618  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:20:50.304608  602969 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:20:50.304682  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:20:50.362212  602969 cri.go:89] found id: "289abb226f700e5f3c20a349d1479217ce3aa4bc311be8aa6d6f12374b0cb68a"
	I1213 19:20:50.362235  602969 cri.go:89] found id: ""
	I1213 19:20:50.362243  602969 logs.go:282] 1 containers: [289abb226f700e5f3c20a349d1479217ce3aa4bc311be8aa6d6f12374b0cb68a]
	I1213 19:20:50.362329  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:20:50.366049  602969 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:20:50.366120  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:20:50.470834  602969 cri.go:89] found id: "d5719b1b478dec4dc55817883d4f9577dc475cde036d3e161334d371c1e81411"
	I1213 19:20:50.470859  602969 cri.go:89] found id: ""
	I1213 19:20:50.470867  602969 logs.go:282] 1 containers: [d5719b1b478dec4dc55817883d4f9577dc475cde036d3e161334d371c1e81411]
	I1213 19:20:50.470921  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:20:50.503447  602969 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:20:50.503522  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:20:50.517428  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:50.605090  602969 cri.go:89] found id: "833e3ba74cac9f8b814ea06aeafe171150043188b329c9d621a03c75dbc4578f"
	I1213 19:20:50.605116  602969 cri.go:89] found id: ""
	I1213 19:20:50.605134  602969 logs.go:282] 1 containers: [833e3ba74cac9f8b814ea06aeafe171150043188b329c9d621a03c75dbc4578f]
	I1213 19:20:50.605196  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:20:50.610821  602969 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:20:50.610898  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:20:50.690567  602969 cri.go:89] found id: "1449f483df90f5a531d350913e0aa7cdf914d18ed8dca152d252402554d37102"
	I1213 19:20:50.690647  602969 cri.go:89] found id: ""
	I1213 19:20:50.690662  602969 logs.go:282] 1 containers: [1449f483df90f5a531d350913e0aa7cdf914d18ed8dca152d252402554d37102]
	I1213 19:20:50.690732  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:20:50.695050  602969 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:20:50.695158  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:20:50.741497  602969 cri.go:89] found id: "4283a1804a94cc88954082e4f508a1cbf5f868d2c0662a9d3f0e826e9b6c5f1a"
	I1213 19:20:50.741522  602969 cri.go:89] found id: ""
	I1213 19:20:50.741531  602969 logs.go:282] 1 containers: [4283a1804a94cc88954082e4f508a1cbf5f868d2c0662a9d3f0e826e9b6c5f1a]
	I1213 19:20:50.741591  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:20:50.745570  602969 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:20:50.745648  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:20:50.791676  602969 cri.go:89] found id: "da25e26a83aad96484fe1eecafba3a8b5e62f5486ff06573a87eb752e248b7f3"
	I1213 19:20:50.791699  602969 cri.go:89] found id: ""
	I1213 19:20:50.791707  602969 logs.go:282] 1 containers: [da25e26a83aad96484fe1eecafba3a8b5e62f5486ff06573a87eb752e248b7f3]
	I1213 19:20:50.791768  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:20:50.802647  602969 logs.go:123] Gathering logs for kubelet ...
	I1213 19:20:50.802675  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 19:20:50.885177  602969 logs.go:138] Found kubelet problem: Dec 13 19:19:23 addons-248098 kubelet[1527]: W1213 19:19:23.865745    1527 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-248098" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-248098' and this object
	W1213 19:20:50.885587  602969 logs.go:138] Found kubelet problem: Dec 13 19:19:23 addons-248098 kubelet[1527]: E1213 19:19:23.865827    1527 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-248098\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-248098' and this object" logger="UnhandledError"
	W1213 19:20:50.911543  602969 logs.go:138] Found kubelet problem: Dec 13 19:19:39 addons-248098 kubelet[1527]: W1213 19:19:39.477016    1527 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-248098" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-248098' and this object
	W1213 19:20:50.911946  602969 logs.go:138] Found kubelet problem: Dec 13 19:19:39 addons-248098 kubelet[1527]: E1213 19:19:39.477065    1527 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-248098\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-248098' and this object" logger="UnhandledError"
	I1213 19:20:50.975179  602969 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:20:50.975311  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 19:20:51.015960  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:51.218135  602969 logs.go:123] Gathering logs for kube-apiserver [27ee00545a23ca3d022b68468445151371316d97acbaf2235b93791b944d3e2d] ...
	I1213 19:20:51.218208  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27ee00545a23ca3d022b68468445151371316d97acbaf2235b93791b944d3e2d"
	I1213 19:20:51.297330  602969 logs.go:123] Gathering logs for etcd [289abb226f700e5f3c20a349d1479217ce3aa4bc311be8aa6d6f12374b0cb68a] ...
	I1213 19:20:51.297371  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 289abb226f700e5f3c20a349d1479217ce3aa4bc311be8aa6d6f12374b0cb68a"
	I1213 19:20:51.364315  602969 logs.go:123] Gathering logs for kube-proxy [1449f483df90f5a531d350913e0aa7cdf914d18ed8dca152d252402554d37102] ...
	I1213 19:20:51.364352  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1449f483df90f5a531d350913e0aa7cdf914d18ed8dca152d252402554d37102"
	I1213 19:20:51.419594  602969 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:20:51.419625  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:20:51.513154  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:51.528485  602969 logs.go:123] Gathering logs for dmesg ...
	I1213 19:20:51.528569  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:20:51.546548  602969 logs.go:123] Gathering logs for coredns [d5719b1b478dec4dc55817883d4f9577dc475cde036d3e161334d371c1e81411] ...
	I1213 19:20:51.546579  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5719b1b478dec4dc55817883d4f9577dc475cde036d3e161334d371c1e81411"
	I1213 19:20:51.597667  602969 logs.go:123] Gathering logs for kube-scheduler [833e3ba74cac9f8b814ea06aeafe171150043188b329c9d621a03c75dbc4578f] ...
	I1213 19:20:51.597699  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 833e3ba74cac9f8b814ea06aeafe171150043188b329c9d621a03c75dbc4578f"
	I1213 19:20:51.650955  602969 logs.go:123] Gathering logs for kube-controller-manager [4283a1804a94cc88954082e4f508a1cbf5f868d2c0662a9d3f0e826e9b6c5f1a] ...
	I1213 19:20:51.651038  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4283a1804a94cc88954082e4f508a1cbf5f868d2c0662a9d3f0e826e9b6c5f1a"
	I1213 19:20:51.747175  602969 logs.go:123] Gathering logs for kindnet [da25e26a83aad96484fe1eecafba3a8b5e62f5486ff06573a87eb752e248b7f3] ...
	I1213 19:20:51.747210  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da25e26a83aad96484fe1eecafba3a8b5e62f5486ff06573a87eb752e248b7f3"
	I1213 19:20:51.795094  602969 logs.go:123] Gathering logs for container status ...
	I1213 19:20:51.795127  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:20:51.851701  602969 out.go:358] Setting ErrFile to fd 2...
	I1213 19:20:51.851732  602969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1213 19:20:51.851816  602969 out.go:270] X Problems detected in kubelet:
	W1213 19:20:51.851832  602969 out.go:270]   Dec 13 19:19:23 addons-248098 kubelet[1527]: W1213 19:19:23.865745    1527 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-248098" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-248098' and this object
	W1213 19:20:51.851839  602969 out.go:270]   Dec 13 19:19:23 addons-248098 kubelet[1527]: E1213 19:19:23.865827    1527 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-248098\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-248098' and this object" logger="UnhandledError"
	W1213 19:20:51.851850  602969 out.go:270]   Dec 13 19:19:39 addons-248098 kubelet[1527]: W1213 19:19:39.477016    1527 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-248098" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-248098' and this object
	W1213 19:20:51.851875  602969 out.go:270]   Dec 13 19:19:39 addons-248098 kubelet[1527]: E1213 19:19:39.477065    1527 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-248098\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-248098' and this object" logger="UnhandledError"
	I1213 19:20:51.851882  602969 out.go:358] Setting ErrFile to fd 2...
	I1213 19:20:51.851888  602969 out.go:392] TERM=,COLORTERM=, which probably does not support color
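(The kubelet problems flagged above are Node-authorizer denials: a node's credential may only read ConfigMaps referenced by pods already bound to that node, and here kube-root-ca.crt was listed before that binding existed, hence "no relationship found between node 'addons-248098' and this object". A way to reproduce the authorization check from outside the node, offered as a diagnostic sketch rather than something this run performed:

	kubectl --context addons-248098 auth can-i list configmaps \
	  --as=system:node:addons-248098 --as-group=system:nodes -n kube-system
)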
	I1213 19:20:52.014612  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:52.515749  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:53.014362  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:53.513704  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:54.014702  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:54.513764  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:55.015201  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:55.584150  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:56.014613  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:56.513381  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:57.013544  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:57.514445  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:58.026351  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:58.514303  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:59.012890  602969 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I1213 19:20:59.512881  602969 kapi.go:107] duration metric: took 1m30.505185227s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I1213 19:20:59.515617  602969 out.go:177] * Enabled addons: amd-gpu-device-plugin, nvidia-device-plugin, ingress-dns, cloud-spanner, storage-provisioner, default-storageclass, inspektor-gadget, metrics-server, yakd, volumesnapshots, registry, ingress, gcp-auth, csi-hostpath-driver
	I1213 19:20:59.518027  602969 addons.go:510] duration metric: took 1m38.287350638s for enable addons: enabled=[amd-gpu-device-plugin nvidia-device-plugin ingress-dns cloud-spanner storage-provisioner default-storageclass inspektor-gadget metrics-server yakd volumesnapshots registry ingress gcp-auth csi-hostpath-driver]
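(With all fourteen addons reported enabled after 1m38s, their status can be confirmed from the host with a routine check; this command is not part of the test run itself:

	minikube -p addons-248098 addons list
)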
	I1213 19:21:01.853210  602969 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:21:01.867505  602969 api_server.go:72] duration metric: took 1m40.637063007s to wait for apiserver process to appear ...
	I1213 19:21:01.867534  602969 api_server.go:88] waiting for apiserver healthz status ...
	I1213 19:21:01.868050  602969 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:21:01.868129  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:21:01.908108  602969 cri.go:89] found id: "27ee00545a23ca3d022b68468445151371316d97acbaf2235b93791b944d3e2d"
	I1213 19:21:01.908133  602969 cri.go:89] found id: ""
	I1213 19:21:01.908141  602969 logs.go:282] 1 containers: [27ee00545a23ca3d022b68468445151371316d97acbaf2235b93791b944d3e2d]
	I1213 19:21:01.908199  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:01.912369  602969 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:21:01.912453  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:21:01.952191  602969 cri.go:89] found id: "289abb226f700e5f3c20a349d1479217ce3aa4bc311be8aa6d6f12374b0cb68a"
	I1213 19:21:01.952214  602969 cri.go:89] found id: ""
	I1213 19:21:01.952223  602969 logs.go:282] 1 containers: [289abb226f700e5f3c20a349d1479217ce3aa4bc311be8aa6d6f12374b0cb68a]
	I1213 19:21:01.952279  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:01.955874  602969 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:21:01.955949  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:21:01.995630  602969 cri.go:89] found id: "d5719b1b478dec4dc55817883d4f9577dc475cde036d3e161334d371c1e81411"
	I1213 19:21:01.995655  602969 cri.go:89] found id: ""
	I1213 19:21:01.995663  602969 logs.go:282] 1 containers: [d5719b1b478dec4dc55817883d4f9577dc475cde036d3e161334d371c1e81411]
	I1213 19:21:01.995723  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:01.999503  602969 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:21:01.999589  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:21:02.046099  602969 cri.go:89] found id: "833e3ba74cac9f8b814ea06aeafe171150043188b329c9d621a03c75dbc4578f"
	I1213 19:21:02.046123  602969 cri.go:89] found id: ""
	I1213 19:21:02.046131  602969 logs.go:282] 1 containers: [833e3ba74cac9f8b814ea06aeafe171150043188b329c9d621a03c75dbc4578f]
	I1213 19:21:02.046193  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:02.050255  602969 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:21:02.050412  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:21:02.092267  602969 cri.go:89] found id: "1449f483df90f5a531d350913e0aa7cdf914d18ed8dca152d252402554d37102"
	I1213 19:21:02.092292  602969 cri.go:89] found id: ""
	I1213 19:21:02.092300  602969 logs.go:282] 1 containers: [1449f483df90f5a531d350913e0aa7cdf914d18ed8dca152d252402554d37102]
	I1213 19:21:02.092389  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:02.096421  602969 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:21:02.096586  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:21:02.137435  602969 cri.go:89] found id: "4283a1804a94cc88954082e4f508a1cbf5f868d2c0662a9d3f0e826e9b6c5f1a"
	I1213 19:21:02.137511  602969 cri.go:89] found id: ""
	I1213 19:21:02.137535  602969 logs.go:282] 1 containers: [4283a1804a94cc88954082e4f508a1cbf5f868d2c0662a9d3f0e826e9b6c5f1a]
	I1213 19:21:02.137622  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:02.141668  602969 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:21:02.141786  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:21:02.183547  602969 cri.go:89] found id: "da25e26a83aad96484fe1eecafba3a8b5e62f5486ff06573a87eb752e248b7f3"
	I1213 19:21:02.183576  602969 cri.go:89] found id: ""
	I1213 19:21:02.183585  602969 logs.go:282] 1 containers: [da25e26a83aad96484fe1eecafba3a8b5e62f5486ff06573a87eb752e248b7f3]
	I1213 19:21:02.183701  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:02.188103  602969 logs.go:123] Gathering logs for etcd [289abb226f700e5f3c20a349d1479217ce3aa4bc311be8aa6d6f12374b0cb68a] ...
	I1213 19:21:02.188132  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 289abb226f700e5f3c20a349d1479217ce3aa4bc311be8aa6d6f12374b0cb68a"
	I1213 19:21:02.241298  602969 logs.go:123] Gathering logs for kube-scheduler [833e3ba74cac9f8b814ea06aeafe171150043188b329c9d621a03c75dbc4578f] ...
	I1213 19:21:02.241332  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 833e3ba74cac9f8b814ea06aeafe171150043188b329c9d621a03c75dbc4578f"
	I1213 19:21:02.289740  602969 logs.go:123] Gathering logs for kubelet ...
	I1213 19:21:02.289774  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 19:21:02.345674  602969 logs.go:138] Found kubelet problem: Dec 13 19:19:23 addons-248098 kubelet[1527]: W1213 19:19:23.865745    1527 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-248098" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-248098' and this object
	W1213 19:21:02.345950  602969 logs.go:138] Found kubelet problem: Dec 13 19:19:23 addons-248098 kubelet[1527]: E1213 19:19:23.865827    1527 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-248098\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-248098' and this object" logger="UnhandledError"
	W1213 19:21:02.361436  602969 logs.go:138] Found kubelet problem: Dec 13 19:19:39 addons-248098 kubelet[1527]: W1213 19:19:39.477016    1527 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-248098" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-248098' and this object
	W1213 19:21:02.361671  602969 logs.go:138] Found kubelet problem: Dec 13 19:19:39 addons-248098 kubelet[1527]: E1213 19:19:39.477065    1527 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-248098\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-248098' and this object" logger="UnhandledError"
	I1213 19:21:02.401354  602969 logs.go:123] Gathering logs for dmesg ...
	I1213 19:21:02.401382  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:21:02.419350  602969 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:21:02.419387  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 19:21:02.572424  602969 logs.go:123] Gathering logs for kube-apiserver [27ee00545a23ca3d022b68468445151371316d97acbaf2235b93791b944d3e2d] ...
	I1213 19:21:02.572457  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27ee00545a23ca3d022b68468445151371316d97acbaf2235b93791b944d3e2d"
	I1213 19:21:02.641829  602969 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:21:02.641866  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:21:02.736850  602969 logs.go:123] Gathering logs for container status ...
	I1213 19:21:02.736888  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:21:02.788016  602969 logs.go:123] Gathering logs for coredns [d5719b1b478dec4dc55817883d4f9577dc475cde036d3e161334d371c1e81411] ...
	I1213 19:21:02.788052  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5719b1b478dec4dc55817883d4f9577dc475cde036d3e161334d371c1e81411"
	I1213 19:21:02.830967  602969 logs.go:123] Gathering logs for kube-proxy [1449f483df90f5a531d350913e0aa7cdf914d18ed8dca152d252402554d37102] ...
	I1213 19:21:02.830999  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1449f483df90f5a531d350913e0aa7cdf914d18ed8dca152d252402554d37102"
	I1213 19:21:02.869566  602969 logs.go:123] Gathering logs for kube-controller-manager [4283a1804a94cc88954082e4f508a1cbf5f868d2c0662a9d3f0e826e9b6c5f1a] ...
	I1213 19:21:02.869595  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4283a1804a94cc88954082e4f508a1cbf5f868d2c0662a9d3f0e826e9b6c5f1a"
	I1213 19:21:02.939984  602969 logs.go:123] Gathering logs for kindnet [da25e26a83aad96484fe1eecafba3a8b5e62f5486ff06573a87eb752e248b7f3] ...
	I1213 19:21:02.940026  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da25e26a83aad96484fe1eecafba3a8b5e62f5486ff06573a87eb752e248b7f3"
	I1213 19:21:02.992970  602969 out.go:358] Setting ErrFile to fd 2...
	I1213 19:21:02.993000  602969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1213 19:21:02.993059  602969 out.go:270] X Problems detected in kubelet:
	W1213 19:21:02.993200  602969 out.go:270]   Dec 13 19:19:23 addons-248098 kubelet[1527]: W1213 19:19:23.865745    1527 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-248098" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-248098' and this object
	W1213 19:21:02.993218  602969 out.go:270]   Dec 13 19:19:23 addons-248098 kubelet[1527]: E1213 19:19:23.865827    1527 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-248098\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-248098' and this object" logger="UnhandledError"
	W1213 19:21:02.993230  602969 out.go:270]   Dec 13 19:19:39 addons-248098 kubelet[1527]: W1213 19:19:39.477016    1527 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-248098" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-248098' and this object
	W1213 19:21:02.993247  602969 out.go:270]   Dec 13 19:19:39 addons-248098 kubelet[1527]: E1213 19:19:39.477065    1527 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-248098\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-248098' and this object" logger="UnhandledError"
	I1213 19:21:02.993261  602969 out.go:358] Setting ErrFile to fd 2...
	I1213 19:21:02.993268  602969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:21:12.994733  602969 api_server.go:253] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
	I1213 19:21:13.008029  602969 api_server.go:279] https://192.168.49.2:8443/healthz returned 200:
	ok
	I1213 19:21:13.011610  602969 api_server.go:141] control plane version: v1.31.2
	I1213 19:21:13.011651  602969 api_server.go:131] duration metric: took 11.144109173s to wait for apiserver health ...
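
The healthz poll above is just an HTTPS GET that must return 200/"ok"; /healthz is readable anonymously under the default system:public-info-viewer binding, so the same probe can be issued by hand against the node address from the log:

    $ out/minikube-linux-arm64 -p addons-248098 ssh "curl -ks https://192.168.49.2:8443/healthz"
    ok
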
	I1213 19:21:13.011662  602969 system_pods.go:43] waiting for kube-system pods to appear ...
	I1213 19:21:13.011689  602969 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I1213 19:21:13.011757  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I1213 19:21:13.055983  602969 cri.go:89] found id: "27ee00545a23ca3d022b68468445151371316d97acbaf2235b93791b944d3e2d"
	I1213 19:21:13.056004  602969 cri.go:89] found id: ""
	I1213 19:21:13.056012  602969 logs.go:282] 1 containers: [27ee00545a23ca3d022b68468445151371316d97acbaf2235b93791b944d3e2d]
	I1213 19:21:13.056076  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:13.060197  602969 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I1213 19:21:13.060272  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I1213 19:21:13.114407  602969 cri.go:89] found id: "289abb226f700e5f3c20a349d1479217ce3aa4bc311be8aa6d6f12374b0cb68a"
	I1213 19:21:13.114431  602969 cri.go:89] found id: ""
	I1213 19:21:13.114438  602969 logs.go:282] 1 containers: [289abb226f700e5f3c20a349d1479217ce3aa4bc311be8aa6d6f12374b0cb68a]
	I1213 19:21:13.114500  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:13.118390  602969 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I1213 19:21:13.118525  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I1213 19:21:13.162684  602969 cri.go:89] found id: "d5719b1b478dec4dc55817883d4f9577dc475cde036d3e161334d371c1e81411"
	I1213 19:21:13.162717  602969 cri.go:89] found id: ""
	I1213 19:21:13.162726  602969 logs.go:282] 1 containers: [d5719b1b478dec4dc55817883d4f9577dc475cde036d3e161334d371c1e81411]
	I1213 19:21:13.162789  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:13.166866  602969 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I1213 19:21:13.166956  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I1213 19:21:13.220934  602969 cri.go:89] found id: "833e3ba74cac9f8b814ea06aeafe171150043188b329c9d621a03c75dbc4578f"
	I1213 19:21:13.220980  602969 cri.go:89] found id: ""
	I1213 19:21:13.220989  602969 logs.go:282] 1 containers: [833e3ba74cac9f8b814ea06aeafe171150043188b329c9d621a03c75dbc4578f]
	I1213 19:21:13.221090  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:13.228707  602969 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I1213 19:21:13.228829  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I1213 19:21:13.289311  602969 cri.go:89] found id: "1449f483df90f5a531d350913e0aa7cdf914d18ed8dca152d252402554d37102"
	I1213 19:21:13.289342  602969 cri.go:89] found id: ""
	I1213 19:21:13.289352  602969 logs.go:282] 1 containers: [1449f483df90f5a531d350913e0aa7cdf914d18ed8dca152d252402554d37102]
	I1213 19:21:13.289424  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:13.294609  602969 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I1213 19:21:13.294728  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I1213 19:21:13.366508  602969 cri.go:89] found id: "4283a1804a94cc88954082e4f508a1cbf5f868d2c0662a9d3f0e826e9b6c5f1a"
	I1213 19:21:13.366557  602969 cri.go:89] found id: ""
	I1213 19:21:13.366567  602969 logs.go:282] 1 containers: [4283a1804a94cc88954082e4f508a1cbf5f868d2c0662a9d3f0e826e9b6c5f1a]
	I1213 19:21:13.366656  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:13.372576  602969 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I1213 19:21:13.372670  602969 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I1213 19:21:13.416348  602969 cri.go:89] found id: "da25e26a83aad96484fe1eecafba3a8b5e62f5486ff06573a87eb752e248b7f3"
	I1213 19:21:13.416425  602969 cri.go:89] found id: ""
	I1213 19:21:13.416449  602969 logs.go:282] 1 containers: [da25e26a83aad96484fe1eecafba3a8b5e62f5486ff06573a87eb752e248b7f3]
	I1213 19:21:13.416529  602969 ssh_runner.go:195] Run: which crictl
	I1213 19:21:13.420352  602969 logs.go:123] Gathering logs for kube-scheduler [833e3ba74cac9f8b814ea06aeafe171150043188b329c9d621a03c75dbc4578f] ...
	I1213 19:21:13.420391  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 833e3ba74cac9f8b814ea06aeafe171150043188b329c9d621a03c75dbc4578f"
	I1213 19:21:13.474383  602969 logs.go:123] Gathering logs for kindnet [da25e26a83aad96484fe1eecafba3a8b5e62f5486ff06573a87eb752e248b7f3] ...
	I1213 19:21:13.474428  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 da25e26a83aad96484fe1eecafba3a8b5e62f5486ff06573a87eb752e248b7f3"
	I1213 19:21:13.522419  602969 logs.go:123] Gathering logs for CRI-O ...
	I1213 19:21:13.522452  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I1213 19:21:13.617192  602969 logs.go:123] Gathering logs for dmesg ...
	I1213 19:21:13.617230  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I1213 19:21:13.634890  602969 logs.go:123] Gathering logs for describe nodes ...
	I1213 19:21:13.634918  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.31.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I1213 19:21:13.784911  602969 logs.go:123] Gathering logs for coredns [d5719b1b478dec4dc55817883d4f9577dc475cde036d3e161334d371c1e81411] ...
	I1213 19:21:13.784948  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 d5719b1b478dec4dc55817883d4f9577dc475cde036d3e161334d371c1e81411"
	I1213 19:21:13.828683  602969 logs.go:123] Gathering logs for kube-proxy [1449f483df90f5a531d350913e0aa7cdf914d18ed8dca152d252402554d37102] ...
	I1213 19:21:13.828720  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 1449f483df90f5a531d350913e0aa7cdf914d18ed8dca152d252402554d37102"
	I1213 19:21:13.868810  602969 logs.go:123] Gathering logs for kube-controller-manager [4283a1804a94cc88954082e4f508a1cbf5f868d2c0662a9d3f0e826e9b6c5f1a] ...
	I1213 19:21:13.868851  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 4283a1804a94cc88954082e4f508a1cbf5f868d2c0662a9d3f0e826e9b6c5f1a"
	I1213 19:21:13.966616  602969 logs.go:123] Gathering logs for container status ...
	I1213 19:21:13.966653  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I1213 19:21:14.036058  602969 logs.go:123] Gathering logs for kubelet ...
	I1213 19:21:14.036098  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	W1213 19:21:14.100263  602969 logs.go:138] Found kubelet problem: Dec 13 19:19:23 addons-248098 kubelet[1527]: W1213 19:19:23.865745    1527 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-248098" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-248098' and this object
	W1213 19:21:14.100509  602969 logs.go:138] Found kubelet problem: Dec 13 19:19:23 addons-248098 kubelet[1527]: E1213 19:19:23.865827    1527 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-248098\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-248098' and this object" logger="UnhandledError"
	W1213 19:21:14.115979  602969 logs.go:138] Found kubelet problem: Dec 13 19:19:39 addons-248098 kubelet[1527]: W1213 19:19:39.477016    1527 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-248098" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-248098' and this object
	W1213 19:21:14.116213  602969 logs.go:138] Found kubelet problem: Dec 13 19:19:39 addons-248098 kubelet[1527]: E1213 19:19:39.477065    1527 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-248098\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-248098' and this object" logger="UnhandledError"
	I1213 19:21:14.156559  602969 logs.go:123] Gathering logs for kube-apiserver [27ee00545a23ca3d022b68468445151371316d97acbaf2235b93791b944d3e2d] ...
	I1213 19:21:14.156587  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27ee00545a23ca3d022b68468445151371316d97acbaf2235b93791b944d3e2d"
	I1213 19:21:14.211656  602969 logs.go:123] Gathering logs for etcd [289abb226f700e5f3c20a349d1479217ce3aa4bc311be8aa6d6f12374b0cb68a] ...
	I1213 19:21:14.211697  602969 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 289abb226f700e5f3c20a349d1479217ce3aa4bc311be8aa6d6f12374b0cb68a"
	I1213 19:21:14.260185  602969 out.go:358] Setting ErrFile to fd 2...
	I1213 19:21:14.260216  602969 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W1213 19:21:14.260273  602969 out.go:270] X Problems detected in kubelet:
	W1213 19:21:14.260311  602969 out.go:270]   Dec 13 19:19:23 addons-248098 kubelet[1527]: W1213 19:19:23.865745    1527 reflector.go:561] object-"kube-system"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-248098" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node 'addons-248098' and this object
	W1213 19:21:14.260318  602969 out.go:270]   Dec 13 19:19:23 addons-248098 kubelet[1527]: E1213 19:19:23.865827    1527 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-248098\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'addons-248098' and this object" logger="UnhandledError"
	W1213 19:21:14.260325  602969 out.go:270]   Dec 13 19:19:39 addons-248098 kubelet[1527]: W1213 19:19:39.477016    1527 reflector.go:561] object-"gcp-auth"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:addons-248098" cannot list resource "configmaps" in API group "" in the namespace "gcp-auth": no relationship found between node 'addons-248098' and this object
	W1213 19:21:14.260332  602969 out.go:270]   Dec 13 19:19:39 addons-248098 kubelet[1527]: E1213 19:19:39.477065    1527 reflector.go:158] "Unhandled Error" err="object-\"gcp-auth\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:addons-248098\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"gcp-auth\": no relationship found between node 'addons-248098' and this object" logger="UnhandledError"
	I1213 19:21:14.260337  602969 out.go:358] Setting ErrFile to fd 2...
	I1213 19:21:14.260346  602969 out.go:392] TERM=,COLORTERM=, which probably does not support color
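
The four kubelet lines flagged in both gathering passes are the same transient startup condition: for a few seconds after boot (19:19:23 and 19:19:39) the node authorizer had not yet recorded a relationship between node addons-248098 and the kube-root-ca.crt ConfigMaps, so the kubelet's reflector list/watch was denied. The warnings stop once the authorizer graph catches up and are unlikely to be related to the ingress failure under test. They can be pulled straight from the node for confirmation:

    $ out/minikube-linux-arm64 -p addons-248098 ssh "sudo journalctl -u kubelet --no-pager | grep kube-root-ca.crt"
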
	I1213 19:21:24.272192  602969 system_pods.go:59] 18 kube-system pods found
	I1213 19:21:24.272235  602969 system_pods.go:61] "coredns-7c65d6cfc9-bt6ls" [23b8e6b9-51eb-4a14-bee8-7eacdb154832] Running
	I1213 19:21:24.272242  602969 system_pods.go:61] "csi-hostpath-attacher-0" [98592c8c-f15c-40c5-831b-2239874143ea] Running
	I1213 19:21:24.272247  602969 system_pods.go:61] "csi-hostpath-resizer-0" [14cdb963-4eb9-4472-8a01-549e09a55047] Running
	I1213 19:21:24.272255  602969 system_pods.go:61] "csi-hostpathplugin-l2fk7" [30df306a-dc88-4eb0-aa19-d35529eda401] Running
	I1213 19:21:24.272260  602969 system_pods.go:61] "etcd-addons-248098" [014814e1-1087-4331-aeb4-7fd59c3165e5] Running
	I1213 19:21:24.272264  602969 system_pods.go:61] "kindnet-n9pvh" [7e6398f0-53e1-4774-bdd6-211a800d8291] Running
	I1213 19:21:24.272268  602969 system_pods.go:61] "kube-apiserver-addons-248098" [a3e569f6-6078-4dc0-a3b2-764a0180614c] Running
	I1213 19:21:24.272273  602969 system_pods.go:61] "kube-controller-manager-addons-248098" [b6473627-2b96-431a-9082-99576908ad11] Running
	I1213 19:21:24.272284  602969 system_pods.go:61] "kube-ingress-dns-minikube" [53321af4-b841-467d-af38-89b82188ff1d] Running
	I1213 19:21:24.272289  602969 system_pods.go:61] "kube-proxy-rcbrb" [fb396ab8-720d-41c3-9d2b-d1b2fb666b0b] Running
	I1213 19:21:24.272296  602969 system_pods.go:61] "kube-scheduler-addons-248098" [ac75ce0f-098a-4f6d-9e98-697f3b89e854] Running
	I1213 19:21:24.272300  602969 system_pods.go:61] "metrics-server-84c5f94fbc-g7jcr" [a41f7493-f390-4111-9ecf-6b9c91d88986] Running
	I1213 19:21:24.272305  602969 system_pods.go:61] "nvidia-device-plugin-daemonset-xsrsn" [bfc935e3-d013-494e-8380-5b4be1f7a0c9] Running
	I1213 19:21:24.272312  602969 system_pods.go:61] "registry-5cc95cd69-5n4c9" [7ec0f719-ff86-4cc0-9868-18a171b8d618] Running
	I1213 19:21:24.272316  602969 system_pods.go:61] "registry-proxy-nvc8d" [c14eabdb-94a1-4ed0-8a97-51210e96f13a] Running
	I1213 19:21:24.272321  602969 system_pods.go:61] "snapshot-controller-56fcc65765-ltsx9" [7191195a-2231-4fe5-9bf3-ba875b3ceeb5] Running
	I1213 19:21:24.272335  602969 system_pods.go:61] "snapshot-controller-56fcc65765-sqhl4" [a11c4e23-9e52-4164-b6f3-f29f74154fab] Running
	I1213 19:21:24.272339  602969 system_pods.go:61] "storage-provisioner" [1d273a3f-36bb-4847-ad88-3544cda8cde5] Running
	I1213 19:21:24.272344  602969 system_pods.go:74] duration metric: took 11.260676188s to wait for pod list to return data ...
	I1213 19:21:24.272356  602969 default_sa.go:34] waiting for default service account to be created ...
	I1213 19:21:24.275135  602969 default_sa.go:45] found service account: "default"
	I1213 19:21:24.275162  602969 default_sa.go:55] duration metric: took 2.799619ms for default service account to be created ...
	I1213 19:21:24.275172  602969 system_pods.go:116] waiting for k8s-apps to be running ...
	I1213 19:21:24.286490  602969 system_pods.go:86] 18 kube-system pods found
	I1213 19:21:24.286530  602969 system_pods.go:89] "coredns-7c65d6cfc9-bt6ls" [23b8e6b9-51eb-4a14-bee8-7eacdb154832] Running
	I1213 19:21:24.286539  602969 system_pods.go:89] "csi-hostpath-attacher-0" [98592c8c-f15c-40c5-831b-2239874143ea] Running
	I1213 19:21:24.286544  602969 system_pods.go:89] "csi-hostpath-resizer-0" [14cdb963-4eb9-4472-8a01-549e09a55047] Running
	I1213 19:21:24.286550  602969 system_pods.go:89] "csi-hostpathplugin-l2fk7" [30df306a-dc88-4eb0-aa19-d35529eda401] Running
	I1213 19:21:24.286555  602969 system_pods.go:89] "etcd-addons-248098" [014814e1-1087-4331-aeb4-7fd59c3165e5] Running
	I1213 19:21:24.286560  602969 system_pods.go:89] "kindnet-n9pvh" [7e6398f0-53e1-4774-bdd6-211a800d8291] Running
	I1213 19:21:24.286565  602969 system_pods.go:89] "kube-apiserver-addons-248098" [a3e569f6-6078-4dc0-a3b2-764a0180614c] Running
	I1213 19:21:24.286570  602969 system_pods.go:89] "kube-controller-manager-addons-248098" [b6473627-2b96-431a-9082-99576908ad11] Running
	I1213 19:21:24.286574  602969 system_pods.go:89] "kube-ingress-dns-minikube" [53321af4-b841-467d-af38-89b82188ff1d] Running
	I1213 19:21:24.286579  602969 system_pods.go:89] "kube-proxy-rcbrb" [fb396ab8-720d-41c3-9d2b-d1b2fb666b0b] Running
	I1213 19:21:24.286583  602969 system_pods.go:89] "kube-scheduler-addons-248098" [ac75ce0f-098a-4f6d-9e98-697f3b89e854] Running
	I1213 19:21:24.286588  602969 system_pods.go:89] "metrics-server-84c5f94fbc-g7jcr" [a41f7493-f390-4111-9ecf-6b9c91d88986] Running
	I1213 19:21:24.286591  602969 system_pods.go:89] "nvidia-device-plugin-daemonset-xsrsn" [bfc935e3-d013-494e-8380-5b4be1f7a0c9] Running
	I1213 19:21:24.286595  602969 system_pods.go:89] "registry-5cc95cd69-5n4c9" [7ec0f719-ff86-4cc0-9868-18a171b8d618] Running
	I1213 19:21:24.286599  602969 system_pods.go:89] "registry-proxy-nvc8d" [c14eabdb-94a1-4ed0-8a97-51210e96f13a] Running
	I1213 19:21:24.286603  602969 system_pods.go:89] "snapshot-controller-56fcc65765-ltsx9" [7191195a-2231-4fe5-9bf3-ba875b3ceeb5] Running
	I1213 19:21:24.286607  602969 system_pods.go:89] "snapshot-controller-56fcc65765-sqhl4" [a11c4e23-9e52-4164-b6f3-f29f74154fab] Running
	I1213 19:21:24.286611  602969 system_pods.go:89] "storage-provisioner" [1d273a3f-36bb-4847-ad88-3544cda8cde5] Running
	I1213 19:21:24.286618  602969 system_pods.go:126] duration metric: took 11.440315ms to wait for k8s-apps to be running ...
	I1213 19:21:24.286645  602969 system_svc.go:44] waiting for kubelet service to be running ....
	I1213 19:21:24.286737  602969 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:21:24.299683  602969 system_svc.go:56] duration metric: took 13.040573ms WaitForService to wait for kubelet
	I1213 19:21:24.299710  602969 kubeadm.go:582] duration metric: took 2m3.069273573s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I1213 19:21:24.299729  602969 node_conditions.go:102] verifying NodePressure condition ...
	I1213 19:21:24.304220  602969 node_conditions.go:122] node storage ephemeral capacity is 203034800Ki
	I1213 19:21:24.304264  602969 node_conditions.go:123] node cpu capacity is 2
	I1213 19:21:24.304277  602969 node_conditions.go:105] duration metric: took 4.542452ms to run NodePressure ...
	I1213 19:21:24.304291  602969 start.go:241] waiting for startup goroutines ...
	I1213 19:21:24.304299  602969 start.go:246] waiting for cluster config update ...
	I1213 19:21:24.304316  602969 start.go:255] writing updated cluster config ...
	I1213 19:21:24.304631  602969 ssh_runner.go:195] Run: rm -f paused
	I1213 19:21:24.730318  602969 start.go:600] kubectl: 1.32.0, cluster: 1.31.2 (minor skew: 1)
	I1213 19:21:24.735680  602969 out.go:177] * Done! kubectl is now configured to use "addons-248098" cluster and "default" namespace by default
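
The "minor skew: 1" note above is informational only: a v1.32 kubectl against a v1.31 control plane is within the one-minor-version skew kubectl officially supports. Both sides can be confirmed with:

    $ kubectl --context addons-248098 version
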
	
	
	==> CRI-O <==
	Dec 13 19:25:58 addons-248098 crio[989]: time="2024-12-13 19:25:58.087485085Z" level=info msg="Got pod network &{Name:ingress-nginx-controller-5f85ff4588-77ds6 Namespace:ingress-nginx ID:0ec565a7d84e5dd239a65d5c7f93d61be7d08c2151021395b7544aed71e2e389 UID:ae0dd9d5-8d09-4bca-88a3-a2588da8666a NetNS:/var/run/netns/799c354c-37fc-4d04-a75a-7d8b2964f2a1 Networks:[{Name:kindnet Ifname:eth0}] RuntimeConfig:map[kindnet:{IP: MAC: PortMappings:[] Bandwidth:<nil> IpRanges:[]}] Aliases:map[]}"
	Dec 13 19:25:58 addons-248098 crio[989]: time="2024-12-13 19:25:58.087635955Z" level=info msg="Deleting pod ingress-nginx_ingress-nginx-controller-5f85ff4588-77ds6 from CNI network \"kindnet\" (type=ptp)"
	Dec 13 19:25:58 addons-248098 crio[989]: time="2024-12-13 19:25:58.116687416Z" level=info msg="Stopped pod sandbox: 0ec565a7d84e5dd239a65d5c7f93d61be7d08c2151021395b7544aed71e2e389" id=ee4c9c92-8c1c-40a5-9d32-54ae89ed3d79 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 19:25:58 addons-248098 crio[989]: time="2024-12-13 19:25:58.909383652Z" level=info msg="Removing container: bda0caff014eeffa81a8a9e5824ea2b326ebac0d39e3d0cf4adaf18614f9cf6d" id=e670a94c-82d1-456e-bb22-34cd317b8541 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 19:25:58 addons-248098 crio[989]: time="2024-12-13 19:25:58.924241563Z" level=info msg="Removed container bda0caff014eeffa81a8a9e5824ea2b326ebac0d39e3d0cf4adaf18614f9cf6d: ingress-nginx/ingress-nginx-controller-5f85ff4588-77ds6/controller" id=e670a94c-82d1-456e-bb22-34cd317b8541 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 19:26:16 addons-248098 crio[989]: time="2024-12-13 19:26:16.961275918Z" level=info msg="Removing container: e10ba2c21305ff7d66fb4f5e12fd2f4f8ea215f12e165136509e7782bfb5b090" id=0980bc5f-1a89-4ca2-a779-451e4a8b5ad2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 19:26:16 addons-248098 crio[989]: time="2024-12-13 19:26:16.979210594Z" level=info msg="Removed container e10ba2c21305ff7d66fb4f5e12fd2f4f8ea215f12e165136509e7782bfb5b090: ingress-nginx/ingress-nginx-admission-patch-7r99g/patch" id=0980bc5f-1a89-4ca2-a779-451e4a8b5ad2 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 19:26:16 addons-248098 crio[989]: time="2024-12-13 19:26:16.980873056Z" level=info msg="Removing container: 999049ad75afc629891c5c2de0f9ef58624b0c773b5dd3871ce0760aaf2270ae" id=7a16522e-63c2-440a-9076-655723fb9378 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 19:26:16 addons-248098 crio[989]: time="2024-12-13 19:26:16.999404900Z" level=info msg="Removed container 999049ad75afc629891c5c2de0f9ef58624b0c773b5dd3871ce0760aaf2270ae: ingress-nginx/ingress-nginx-admission-create-2fpd2/create" id=7a16522e-63c2-440a-9076-655723fb9378 name=/runtime.v1.RuntimeService/RemoveContainer
	Dec 13 19:26:17 addons-248098 crio[989]: time="2024-12-13 19:26:17.000820852Z" level=info msg="Stopping pod sandbox: 0ec565a7d84e5dd239a65d5c7f93d61be7d08c2151021395b7544aed71e2e389" id=9b4847de-c9e6-456e-b42b-5f0c154100a5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 19:26:17 addons-248098 crio[989]: time="2024-12-13 19:26:17.000867770Z" level=info msg="Stopped pod sandbox (already stopped): 0ec565a7d84e5dd239a65d5c7f93d61be7d08c2151021395b7544aed71e2e389" id=9b4847de-c9e6-456e-b42b-5f0c154100a5 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 19:26:17 addons-248098 crio[989]: time="2024-12-13 19:26:17.001178510Z" level=info msg="Removing pod sandbox: 0ec565a7d84e5dd239a65d5c7f93d61be7d08c2151021395b7544aed71e2e389" id=a879fd96-470b-47fb-b7a3-2ec0aeb40193 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 13 19:26:17 addons-248098 crio[989]: time="2024-12-13 19:26:17.016838409Z" level=info msg="Removed pod sandbox: 0ec565a7d84e5dd239a65d5c7f93d61be7d08c2151021395b7544aed71e2e389" id=a879fd96-470b-47fb-b7a3-2ec0aeb40193 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 13 19:26:17 addons-248098 crio[989]: time="2024-12-13 19:26:17.017439524Z" level=info msg="Stopping pod sandbox: 657f4440d1fda6128a07d0d8743b744dbcb8535eda71acc33d2c3bc7f6d5e194" id=9d0fc9b6-f286-4995-8ee4-451583c3bcce name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 19:26:17 addons-248098 crio[989]: time="2024-12-13 19:26:17.017484111Z" level=info msg="Stopped pod sandbox (already stopped): 657f4440d1fda6128a07d0d8743b744dbcb8535eda71acc33d2c3bc7f6d5e194" id=9d0fc9b6-f286-4995-8ee4-451583c3bcce name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 19:26:17 addons-248098 crio[989]: time="2024-12-13 19:26:17.018602698Z" level=info msg="Removing pod sandbox: 657f4440d1fda6128a07d0d8743b744dbcb8535eda71acc33d2c3bc7f6d5e194" id=33770eda-232a-4b4d-871e-18535e922a7b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 13 19:26:17 addons-248098 crio[989]: time="2024-12-13 19:26:17.027766735Z" level=info msg="Removed pod sandbox: 657f4440d1fda6128a07d0d8743b744dbcb8535eda71acc33d2c3bc7f6d5e194" id=33770eda-232a-4b4d-871e-18535e922a7b name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 13 19:26:17 addons-248098 crio[989]: time="2024-12-13 19:26:17.028440557Z" level=info msg="Stopping pod sandbox: 0ad06363f4b956329bd95c0635e6b119c269d89124bb8fba548c5159fae8450a" id=788db520-9d3f-4e8f-9683-247be286d832 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 19:26:17 addons-248098 crio[989]: time="2024-12-13 19:26:17.028481567Z" level=info msg="Stopped pod sandbox (already stopped): 0ad06363f4b956329bd95c0635e6b119c269d89124bb8fba548c5159fae8450a" id=788db520-9d3f-4e8f-9683-247be286d832 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 19:26:17 addons-248098 crio[989]: time="2024-12-13 19:26:17.028807766Z" level=info msg="Removing pod sandbox: 0ad06363f4b956329bd95c0635e6b119c269d89124bb8fba548c5159fae8450a" id=0a148c1f-576e-4b1d-aec4-29947cd3eb01 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 13 19:26:17 addons-248098 crio[989]: time="2024-12-13 19:26:17.037952341Z" level=info msg="Removed pod sandbox: 0ad06363f4b956329bd95c0635e6b119c269d89124bb8fba548c5159fae8450a" id=0a148c1f-576e-4b1d-aec4-29947cd3eb01 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 13 19:26:17 addons-248098 crio[989]: time="2024-12-13 19:26:17.038510682Z" level=info msg="Stopping pod sandbox: 13679a5079d4887987b11e15c56ac222308cf109f2e4d151c2432e13f56c3783" id=6c93992e-1b6b-4f9d-90f1-13e5a62d28c2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 19:26:17 addons-248098 crio[989]: time="2024-12-13 19:26:17.038565370Z" level=info msg="Stopped pod sandbox (already stopped): 13679a5079d4887987b11e15c56ac222308cf109f2e4d151c2432e13f56c3783" id=6c93992e-1b6b-4f9d-90f1-13e5a62d28c2 name=/runtime.v1.RuntimeService/StopPodSandbox
	Dec 13 19:26:17 addons-248098 crio[989]: time="2024-12-13 19:26:17.038971652Z" level=info msg="Removing pod sandbox: 13679a5079d4887987b11e15c56ac222308cf109f2e4d151c2432e13f56c3783" id=90a4b020-5497-4aa7-9a60-63ba36efcaa0 name=/runtime.v1.RuntimeService/RemovePodSandbox
	Dec 13 19:26:17 addons-248098 crio[989]: time="2024-12-13 19:26:17.047910908Z" level=info msg="Removed pod sandbox: 13679a5079d4887987b11e15c56ac222308cf109f2e4d151c2432e13f56c3783" id=90a4b020-5497-4aa7-9a60-63ba36efcaa0 name=/runtime.v1.RuntimeService/RemovePodSandbox
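
The CRI-O excerpt above shows the ingress-nginx controller sandbox being stopped at 19:25:58 and the already-stopped sandboxes being removed in a batch at 19:26:17, consistent with kubelet's periodic pod sandbox garbage collection rather than an error path. Sandbox state on the node can be watched while this happens; a sketch:

    $ out/minikube-linux-arm64 -p addons-248098 ssh "sudo crictl pods --state NotReady"
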
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                   CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	9d9515c6509ec       docker.io/kicbase/echo-server@sha256:127ac38a2bb9537b7f252addff209ea6801edcac8a92c8b1104dacd66a583ed6                   About a minute ago   Running             hello-world-app           0                   0fc8cd343fdb5       hello-world-app-55bf9c44b4-z9wlr
	b7d7a44eec17b       docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4                         3 minutes ago        Running             nginx                     0                   168334f3be3f6       nginx
	0ee8eaa9b3f42       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e                     5 minutes ago        Running             busybox                   0                   0ae8365e4a516       busybox
	1503028b745d0       docker.io/rancher/local-path-provisioner@sha256:689a2489a24e74426e4a4666e611c988202c5fa995908b0c60133aca3eb87d98        7 minutes ago        Running             local-path-provisioner    0                   9063ee2cd8175       local-path-provisioner-86d989889c-rgd6q
	eb0c779bf9b1d       registry.k8s.io/metrics-server/metrics-server@sha256:048bcf48fc2cce517a61777e22bac782ba59ea5e9b9a54bcb42dbee99566a91f   7 minutes ago        Running             metrics-server            0                   25e7603213900       metrics-server-84c5f94fbc-g7jcr
	d5719b1b478de       2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4                                                        7 minutes ago        Running             coredns                   0                   5c0b264fe641c       coredns-7c65d6cfc9-bt6ls
	0c0704d382a69       ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6                                                        7 minutes ago        Running             storage-provisioner       0                   d7807360953a9       storage-provisioner
	da25e26a83aad       docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e                      7 minutes ago        Running             kindnet-cni               0                   96f405480c5da       kindnet-n9pvh
	1449f483df90f       021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba                                                        7 minutes ago        Running             kube-proxy                0                   9de7aa20493ea       kube-proxy-rcbrb
	27ee00545a23c       f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270                                                        8 minutes ago        Running             kube-apiserver            0                   7082116ed71bc       kube-apiserver-addons-248098
	289abb226f700       27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da                                                        8 minutes ago        Running             etcd                      0                   7412a2a5bc972       etcd-addons-248098
	833e3ba74cac9       d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a                                                        8 minutes ago        Running             kube-scheduler            0                   249b5349b7b11       kube-scheduler-addons-248098
	4283a1804a94c       9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba                                                        8 minutes ago        Running             kube-controller-manager   0                   680c82ba028a7       kube-controller-manager-addons-248098
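
The truncated IDs in the first column are the handles that the earlier log-gathering commands expand to their full form; crictl accepts any unique ID prefix, so a single container from this table can be tailed or inspected directly. For example, for the coredns container:

    $ out/minikube-linux-arm64 -p addons-248098 ssh "sudo crictl logs --tail 50 d5719b1b478de"
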
	
	
	==> coredns [d5719b1b478dec4dc55817883d4f9577dc475cde036d3e161334d371c1e81411] <==
	[INFO] 10.244.0.20:56947 - 28373 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000042396s
	[INFO] 10.244.0.20:56947 - 10629 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000044038s
	[INFO] 10.244.0.20:39471 - 12049 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001879942s
	[INFO] 10.244.0.20:39471 - 48081 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000144692s
	[INFO] 10.244.0.20:56947 - 990 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001472847s
	[INFO] 10.244.0.20:56947 - 28928 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001009999s
	[INFO] 10.244.0.20:56947 - 821 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000051003s
	[INFO] 10.244.0.20:40156 - 54931 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000137537s
	[INFO] 10.244.0.20:39932 - 48071 "A IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000074971s
	[INFO] 10.244.0.20:39932 - 38428 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000070089s
	[INFO] 10.244.0.20:40156 - 65286 "AAAA IN hello-world-app.default.svc.cluster.local.ingress-nginx.svc.cluster.local. udp 91 false 512" NXDOMAIN qr,aa,rd 184 0.000290375s
	[INFO] 10.244.0.20:39932 - 22650 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00013359s
	[INFO] 10.244.0.20:40156 - 15885 "A IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000080165s
	[INFO] 10.244.0.20:40156 - 34317 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.000061761s
	[INFO] 10.244.0.20:40156 - 26464 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000061277s
	[INFO] 10.244.0.20:40156 - 39085 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000047566s
	[INFO] 10.244.0.20:39932 - 4405 "AAAA IN hello-world-app.default.svc.cluster.local.svc.cluster.local. udp 77 false 512" NXDOMAIN qr,aa,rd 170 0.00031115s
	[INFO] 10.244.0.20:40156 - 40296 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001373195s
	[INFO] 10.244.0.20:39932 - 22890 "A IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000094163s
	[INFO] 10.244.0.20:40156 - 35584 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.0011444s
	[INFO] 10.244.0.20:40156 - 34664 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000056854s
	[INFO] 10.244.0.20:39932 - 48511 "AAAA IN hello-world-app.default.svc.cluster.local.cluster.local. udp 73 false 512" NXDOMAIN qr,aa,rd 166 0.000046409s
	[INFO] 10.244.0.20:39932 - 64665 "A IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001346446s
	[INFO] 10.244.0.20:39932 - 56824 "AAAA IN hello-world-app.default.svc.cluster.local.us-east-2.compute.internal. udp 86 false 512" NXDOMAIN qr,rd,ra 86 0.001227889s
	[INFO] 10.244.0.20:39932 - 28941 "A IN hello-world-app.default.svc.cluster.local. udp 59 false 512" NOERROR qr,aa,rd 116 0.000062064s
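
The NXDOMAIN/NOERROR pattern above is the normal resolv.conf search walk, not a lookup failure: with the default ndots:5, hello-world-app.default.svc.cluster.local (four dots) is first tried under every search suffix, including the host-inherited us-east-2.compute.internal, before the name is queried as-is and answers NOERROR. The effective search list can be read from any pod; a sketch using the busybox pod this suite already runs:

    $ kubectl --context addons-248098 exec busybox -- cat /etc/resolv.conf
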
	
	
	==> describe nodes <==
	Name:               addons-248098
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=arm64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=arm64
	                    kubernetes.io/hostname=addons-248098
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=68ea3eca706f73191794a96e3518c1d004192956
	                    minikube.k8s.io/name=addons-248098
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_12_13T19_19_17_0700
	                    minikube.k8s.io/version=v1.34.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-248098
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 13 Dec 2024 19:19:13 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-248098
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 13 Dec 2024 19:27:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 13 Dec 2024 19:26:25 +0000   Fri, 13 Dec 2024 19:19:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 13 Dec 2024 19:26:25 +0000   Fri, 13 Dec 2024 19:19:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 13 Dec 2024 19:26:25 +0000   Fri, 13 Dec 2024 19:19:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 13 Dec 2024 19:26:25 +0000   Fri, 13 Dec 2024 19:19:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.49.2
	  Hostname:    addons-248098
	Capacity:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  203034800Ki
	  hugepages-1Gi:      0
	  hugepages-2Mi:      0
	  hugepages-32Mi:     0
	  hugepages-64Ki:     0
	  memory:             8022296Ki
	  pods:               110
	System Info:
	  Machine ID:                 0af0368374054463b8b1bd628ee8eb22
	  System UUID:                dce25a95-cc3d-451b-b59c-5c92da6108a0
	  Boot ID:                    8bc558cc-8777-4865-b401-e730957079d4
	  Kernel Version:             5.15.0-1072-aws
	  OS Image:                   Ubuntu 22.04.5 LTS
	  Operating System:           linux
	  Architecture:               arm64
	  Container Runtime Version:  cri-o://1.24.6
	  Kubelet Version:            v1.31.2
	  Kube-Proxy Version:         v1.31.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (13 in total)
	  Namespace                   Name                                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m58s
	  default                     hello-world-app-55bf9c44b4-z9wlr           0 (0%)        0 (0%)      0 (0%)           0 (0%)         94s
	  default                     nginx                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m53s
	  kube-system                 coredns-7c65d6cfc9-bt6ls                   100m (5%)     0 (0%)      70Mi (0%)        170Mi (2%)     8m
	  kube-system                 etcd-addons-248098                         100m (5%)     0 (0%)      100Mi (1%)       0 (0%)         8m7s
	  kube-system                 kindnet-n9pvh                              100m (5%)     100m (5%)   50Mi (0%)        50Mi (0%)      8m1s
	  kube-system                 kube-apiserver-addons-248098               250m (12%)    0 (0%)      0 (0%)           0 (0%)         8m7s
	  kube-system                 kube-controller-manager-addons-248098      200m (10%)    0 (0%)      0 (0%)           0 (0%)         8m7s
	  kube-system                 kube-proxy-rcbrb                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m
	  kube-system                 kube-scheduler-addons-248098               100m (5%)     0 (0%)      0 (0%)           0 (0%)         8m7s
	  kube-system                 metrics-server-84c5f94fbc-g7jcr            100m (5%)     0 (0%)      200Mi (2%)       0 (0%)         7m57s
	  kube-system                 storage-provisioner                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m57s
	  local-path-storage          local-path-provisioner-86d989889c-rgd6q    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                950m (47%)  100m (5%)
	  memory             420Mi (5%)  220Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-1Gi      0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	  hugepages-32Mi     0 (0%)      0 (0%)
	  hugepages-64Ki     0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 7m54s                kube-proxy       
	  Normal   Starting                 8m7s                 kubelet          Starting kubelet.
	  Warning  CgroupV1                 8m7s                 kubelet          Cgroup v1 support is in maintenance mode, please migrate to Cgroup v2.
	  Normal   NodeHasSufficientMemory  8m7s (x2 over 8m7s)  kubelet          Node addons-248098 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m7s (x2 over 8m7s)  kubelet          Node addons-248098 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m7s (x2 over 8m7s)  kubelet          Node addons-248098 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           8m3s                 node-controller  Node addons-248098 event: Registered Node addons-248098 in Controller
	  Normal   NodeReady                7m44s                kubelet          Node addons-248098 status is now: NodeReady
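
This is the node state that the NodePressure check at 19:21:24 in the start log verified programmatically: all three pressure conditions False and Ready True since 19:19:39. The same view in one line:

    $ kubectl --context addons-248098 get node addons-248098 \
        -o jsonpath='{range .status.conditions[*]}{.type}={.status}{"\n"}{end}'
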
	
	
	==> dmesg <==
	
	
	==> etcd [289abb226f700e5f3c20a349d1479217ce3aa4bc311be8aa6d6f12374b0cb68a] <==
	{"level":"info","ts":"2024-12-13T19:19:10.590717Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2024-12-13T19:19:10.591625Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.49.2:2379"}
	{"level":"info","ts":"2024-12-13T19:19:21.946468Z","caller":"traceutil/trace.go:171","msg":"trace[936921410] transaction","detail":"{read_only:false; response_revision:316; number_of_response:1; }","duration":"127.534977ms","start":"2024-12-13T19:19:21.818902Z","end":"2024-12-13T19:19:21.946437Z","steps":["trace[936921410] 'process raft request'  (duration: 43.075222ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:19:22.355014Z","caller":"traceutil/trace.go:171","msg":"trace[1311423016] transaction","detail":"{read_only:false; response_revision:318; number_of_response:1; }","duration":"138.771034ms","start":"2024-12-13T19:19:22.216225Z","end":"2024-12-13T19:19:22.354996Z","steps":["trace[1311423016] 'process raft request'  (duration: 138.610383ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:19:22.401520Z","caller":"traceutil/trace.go:171","msg":"trace[1286345005] transaction","detail":"{read_only:false; response_revision:319; number_of_response:1; }","duration":"185.164389ms","start":"2024-12-13T19:19:22.216333Z","end":"2024-12-13T19:19:22.401497Z","steps":["trace[1286345005] 'process raft request'  (duration: 138.615208ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:19:22.417210Z","caller":"traceutil/trace.go:171","msg":"trace[822680764] linearizableReadLoop","detail":"{readStateIndex:326; appliedIndex:325; }","duration":"200.919964ms","start":"2024-12-13T19:19:22.216276Z","end":"2024-12-13T19:19:22.417196Z","steps":["trace[822680764] 'read index received'  (duration: 103.548881ms)","trace[822680764] 'applied index is now lower than readState.Index'  (duration: 97.370534ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-13T19:19:22.417327Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"201.029357ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/replicaset-controller\" ","response":"range_response_count:1 size:207"}
	{"level":"info","ts":"2024-12-13T19:19:22.502403Z","caller":"traceutil/trace.go:171","msg":"trace[322309030] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/replicaset-controller; range_end:; response_count:1; response_revision:320; }","duration":"286.109173ms","start":"2024-12-13T19:19:22.216272Z","end":"2024-12-13T19:19:22.502382Z","steps":["trace[322309030] 'agreement among raft nodes before linearized reading'  (duration: 200.970115ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:19:22.417422Z","caller":"traceutil/trace.go:171","msg":"trace[1489321000] transaction","detail":"{read_only:false; response_revision:320; number_of_response:1; }","duration":"162.451632ms","start":"2024-12-13T19:19:22.254964Z","end":"2024-12-13T19:19:22.417415Z","steps":["trace[1489321000] 'process raft request'  (duration: 162.137149ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:19:22.571107Z","caller":"traceutil/trace.go:171","msg":"trace[2079234850] transaction","detail":"{read_only:false; response_revision:321; number_of_response:1; }","duration":"250.729686ms","start":"2024-12-13T19:19:22.320359Z","end":"2024-12-13T19:19:22.571089Z","steps":["trace[2079234850] 'process raft request'  (duration: 209.514181ms)","trace[2079234850] 'compare'  (duration: 37.338718ms)"],"step_count":2}
	{"level":"info","ts":"2024-12-13T19:19:22.579490Z","caller":"traceutil/trace.go:171","msg":"trace[1110984546] transaction","detail":"{read_only:false; response_revision:322; number_of_response:1; }","duration":"258.994085ms","start":"2024-12-13T19:19:22.320480Z","end":"2024-12-13T19:19:22.579474Z","steps":["trace[1110984546] 'process raft request'  (duration: 250.342153ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-13T19:19:22.628459Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"308.003647ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/configmaps/kube-system/coredns\" ","response":"range_response_count:1 size:612"}
	{"level":"info","ts":"2024-12-13T19:19:22.649678Z","caller":"traceutil/trace.go:171","msg":"trace[627535555] range","detail":"{range_begin:/registry/configmaps/kube-system/coredns; range_end:; response_count:1; response_revision:322; }","duration":"329.22598ms","start":"2024-12-13T19:19:22.320426Z","end":"2024-12-13T19:19:22.649652Z","steps":["trace[627535555] 'agreement among raft nodes before linearized reading'  (duration: 279.7766ms)","trace[627535555] 'range keys from bolt db'  (duration: 25.603758ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-13T19:19:22.649954Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-13T19:19:22.320406Z","time spent":"329.514526ms","remote":"127.0.0.1:37482","response type":"/etcdserverpb.KV/Range","request count":0,"request size":42,"response count":1,"response size":636,"request content":"key:\"/registry/configmaps/kube-system/coredns\" "}
	{"level":"warn","ts":"2024-12-13T19:19:22.650627Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"330.164536ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces\" limit:1 ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-12-13T19:19:22.656078Z","caller":"traceutil/trace.go:171","msg":"trace[92943196] range","detail":"{range_begin:/registry/namespaces; range_end:; response_count:0; response_revision:322; }","duration":"335.61055ms","start":"2024-12-13T19:19:22.320447Z","end":"2024-12-13T19:19:22.656058Z","steps":["trace[92943196] 'agreement among raft nodes before linearized reading'  (duration: 330.139543ms)"],"step_count":1}
	{"level":"warn","ts":"2024-12-13T19:19:22.656260Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-12-13T19:19:22.320436Z","time spent":"335.796201ms","remote":"127.0.0.1:37498","response type":"/etcdserverpb.KV/Range","request count":0,"request size":24,"response count":0,"response size":29,"request content":"key:\"/registry/namespaces\" limit:1 "}
	{"level":"info","ts":"2024-12-13T19:19:22.715634Z","caller":"traceutil/trace.go:171","msg":"trace[529049175] transaction","detail":"{read_only:false; response_revision:323; number_of_response:1; }","duration":"154.336798ms","start":"2024-12-13T19:19:22.561281Z","end":"2024-12-13T19:19:22.715618Z","steps":["trace[529049175] 'process raft request'  (duration: 154.083543ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:20:55.571373Z","caller":"traceutil/trace.go:171","msg":"trace[806318250] transaction","detail":"{read_only:false; response_revision:1226; number_of_response:1; }","duration":"106.250175ms","start":"2024-12-13T19:20:55.465107Z","end":"2024-12-13T19:20:55.571357Z","steps":["trace[806318250] 'process raft request'  (duration: 106.12251ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:20:55.571507Z","caller":"traceutil/trace.go:171","msg":"trace[1413756354] transaction","detail":"{read_only:false; response_revision:1227; number_of_response:1; }","duration":"106.292761ms","start":"2024-12-13T19:20:55.465208Z","end":"2024-12-13T19:20:55.571501Z","steps":["trace[1413756354] 'process raft request'  (duration: 106.0575ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:20:55.571631Z","caller":"traceutil/trace.go:171","msg":"trace[984580659] transaction","detail":"{read_only:false; response_revision:1228; number_of_response:1; }","duration":"105.507881ms","start":"2024-12-13T19:20:55.466116Z","end":"2024-12-13T19:20:55.571624Z","steps":["trace[984580659] 'process raft request'  (duration: 105.172968ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:20:55.571726Z","caller":"traceutil/trace.go:171","msg":"trace[1743161593] linearizableReadLoop","detail":"{readStateIndex:1262; appliedIndex:1257; }","duration":"102.474618ms","start":"2024-12-13T19:20:55.469245Z","end":"2024-12-13T19:20:55.571719Z","steps":["trace[1743161593] 'read index received'  (duration: 15.995395ms)","trace[1743161593] 'applied index is now lower than readState.Index'  (duration: 86.478632ms)"],"step_count":2}
	{"level":"warn","ts":"2024-12-13T19:20:55.572400Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"103.138971ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/csi-hostpathplugin-l2fk7\" ","response":"range_response_count:1 size:12993"}
	{"level":"info","ts":"2024-12-13T19:20:55.572439Z","caller":"traceutil/trace.go:171","msg":"trace[1715638475] range","detail":"{range_begin:/registry/pods/kube-system/csi-hostpathplugin-l2fk7; range_end:; response_count:1; response_revision:1229; }","duration":"103.18936ms","start":"2024-12-13T19:20:55.469241Z","end":"2024-12-13T19:20:55.572430Z","steps":["trace[1715638475] 'agreement among raft nodes before linearized reading'  (duration: 102.574073ms)"],"step_count":1}
	{"level":"info","ts":"2024-12-13T19:20:55.571644Z","caller":"traceutil/trace.go:171","msg":"trace[1022539298] transaction","detail":"{read_only:false; response_revision:1225; number_of_response:1; }","duration":"106.60575ms","start":"2024-12-13T19:20:55.465021Z","end":"2024-12-13T19:20:55.571627Z","steps":["trace[1022539298] 'process raft request'  (duration: 52.365558ms)","trace[1022539298] 'compare'  (duration: 53.748928ms)"],"step_count":2}
	
	
	==> kernel <==
	 19:27:23 up  3:09,  0 users,  load average: 0.08, 1.10, 2.22
	Linux addons-248098 5.15.0-1072-aws #78~20.04.1-Ubuntu SMP Wed Oct 9 15:29:54 UTC 2024 aarch64 aarch64 aarch64 GNU/Linux
	PRETTY_NAME="Ubuntu 22.04.5 LTS"
	
	
	==> kindnet [da25e26a83aad96484fe1eecafba3a8b5e62f5486ff06573a87eb752e248b7f3] <==
	I1213 19:25:19.343580       1 main.go:301] handling current node
	I1213 19:25:29.335421       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:25:29.335459       1 main.go:301] handling current node
	I1213 19:25:39.340798       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:25:39.340837       1 main.go:301] handling current node
	I1213 19:25:49.335813       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:25:49.335848       1 main.go:301] handling current node
	I1213 19:25:59.338956       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:25:59.338990       1 main.go:301] handling current node
	I1213 19:26:09.339228       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:26:09.339538       1 main.go:301] handling current node
	I1213 19:26:19.335238       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:26:19.335276       1 main.go:301] handling current node
	I1213 19:26:29.335213       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:26:29.335336       1 main.go:301] handling current node
	I1213 19:26:39.335641       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:26:39.335778       1 main.go:301] handling current node
	I1213 19:26:49.335803       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:26:49.335847       1 main.go:301] handling current node
	I1213 19:26:59.343669       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:26:59.343712       1 main.go:301] handling current node
	I1213 19:27:09.336128       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:27:09.336165       1 main.go:301] handling current node
	I1213 19:27:19.343967       1 main.go:297] Handling node with IPs: map[192.168.49.2:{}]
	I1213 19:27:19.344001       1 main.go:301] handling current node
	
	
	==> kube-apiserver [27ee00545a23ca3d022b68468445151371316d97acbaf2235b93791b944d3e2d] <==
	 > logger="UnhandledError"
	E1213 19:20:49.877444       1 remote_available_controller.go:448] "Unhandled Error" err="v1beta1.metrics.k8s.io failed with: failing or missing response from https://10.97.72.3:443/apis/metrics.k8s.io/v1beta1: Get \"https://10.97.72.3:443/apis/metrics.k8s.io/v1beta1\": dial tcp 10.97.72.3:443: connect: connection refused" logger="UnhandledError"
	I1213 19:20:50.167714       1 handler.go:286] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	E1213 19:21:34.791748       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:36812: use of closed network connection
	E1213 19:21:35.211372       1 conn.go:339] Error on socket receive: read tcp 192.168.49.2:8443->192.168.49.1:36854: use of closed network connection
	I1213 19:21:44.609678       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.104.201.169"}
	I1213 19:22:47.019728       1 controller.go:615] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I1213 19:23:10.541433       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:23:10.547768       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 19:23:10.579260       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:23:10.579633       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 19:23:10.594662       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:23:10.594705       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 19:23:10.604171       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:23:10.604219       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I1213 19:23:10.822739       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I1213 19:23:10.822775       1 handler.go:286] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W1213 19:23:11.599147       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W1213 19:23:11.823336       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	W1213 19:23:11.835647       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	I1213 19:23:24.385887       1 handler.go:286] Adding GroupVersion gadget.kinvolk.io v1alpha1 to ResourceManager
	W1213 19:23:25.508597       1 cacher.go:171] Terminating all watchers from cacher traces.gadget.kinvolk.io
	I1213 19:23:29.989652       1 controller.go:615] quota admission added evaluator for: ingresses.networking.k8s.io
	I1213 19:23:30.374495       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.97.136.243"}
	I1213 19:25:50.130689       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.109.68.160"}
	
	
	==> kube-controller-manager [4283a1804a94cc88954082e4f508a1cbf5f868d2c0662a9d3f0e826e9b6c5f1a] <==
	I1213 19:25:51.906869       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="12.483711ms"
	I1213 19:25:51.906943       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/hello-world-app-55bf9c44b4" duration="34.914µs"
	I1213 19:25:54.899578       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-create" delay="0s"
	I1213 19:25:54.905677       1 job_controller.go:568] "enqueueing job" logger="job-controller" key="ingress-nginx/ingress-nginx-admission-patch" delay="0s"
	I1213 19:25:54.908839       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-5f85ff4588" duration="11.775µs"
	W1213 19:25:57.455454       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:25:57.455496       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:26:05.431155       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:26:05.431198       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1213 19:26:05.444656       1 namespace_controller.go:187] "Namespace has been deleted" logger="namespace-controller" namespace="ingress-nginx"
	W1213 19:26:08.511216       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:26:08.511262       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	I1213 19:26:25.677683       1 range_allocator.go:241] "Successfully synced" logger="node-ipam-controller" key="addons-248098"
	W1213 19:26:30.154298       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:26:30.154348       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:26:40.054988       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:26:40.055125       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:26:41.803781       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:26:41.803825       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:26:53.468345       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:26:53.468393       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:27:16.935509       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:27:16.935549       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	W1213 19:27:20.034554       1 reflector.go:561] k8s.io/client-go/metadata/metadatainformer/informer.go:138: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E1213 19:27:20.034602       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/metadata/metadatainformer/informer.go:138: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource" logger="UnhandledError"
	
	
	==> kube-proxy [1449f483df90f5a531d350913e0aa7cdf914d18ed8dca152d252402554d37102] <==
	I1213 19:19:26.856575       1 server_linux.go:66] "Using iptables proxy"
	I1213 19:19:27.434663       1 server.go:677] "Successfully retrieved node IP(s)" IPs=["192.168.49.2"]
	E1213 19:19:27.434728       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I1213 19:19:28.459633       1 server.go:243] "kube-proxy running in dual-stack mode" primary ipFamily="IPv4"
	I1213 19:19:28.459776       1 server_linux.go:169] "Using iptables Proxier"
	I1213 19:19:28.462676       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I1213 19:19:28.463711       1 server.go:483] "Version info" version="v1.31.2"
	I1213 19:19:28.463786       1 server.go:485] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I1213 19:19:28.553230       1 config.go:199] "Starting service config controller"
	I1213 19:19:28.553341       1 shared_informer.go:313] Waiting for caches to sync for service config
	I1213 19:19:28.553931       1 config.go:105] "Starting endpoint slice config controller"
	I1213 19:19:28.575024       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I1213 19:19:28.554133       1 config.go:328] "Starting node config controller"
	I1213 19:19:28.575148       1 shared_informer.go:313] Waiting for caches to sync for node config
	I1213 19:19:28.710663       1 shared_informer.go:320] Caches are synced for service config
	I1213 19:19:28.710979       1 shared_informer.go:320] Caches are synced for node config
	I1213 19:19:28.711013       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [833e3ba74cac9f8b814ea06aeafe171150043188b329c9d621a03c75dbc4578f] <==
	W1213 19:19:13.992373       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E1213 19:19:13.994022       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:19:13.992722       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1213 19:19:13.994124       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:19:14.810363       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E1213 19:19:14.811859       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:19:14.830724       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E1213 19:19:14.830897       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:19:14.876077       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E1213 19:19:14.876126       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:19:14.882112       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E1213 19:19:14.882231       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 19:19:14.988961       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E1213 19:19:14.989138       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1213 19:19:14.991077       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E1213 19:19:14.991192       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W1213 19:19:15.093515       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E1213 19:19:15.093566       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 19:19:15.127159       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E1213 19:19:15.127305       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W1213 19:19:15.207229       1 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E1213 19:19:15.207275       1 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W1213 19:19:15.258698       1 reflector.go:561] runtime/asm_arm64.s:1222: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E1213 19:19:15.258955       1 reflector.go:158] "Unhandled Error" err="runtime/asm_arm64.s:1222: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	I1213 19:19:17.658668       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Dec 13 19:25:58 addons-248098 kubelet[1527]: I1213 19:25:58.357914    1527 reconciler_common.go:288] "Volume detached for volume \"kube-api-access-f8p9w\" (UniqueName: \"kubernetes.io/projected/ae0dd9d5-8d09-4bca-88a3-a2588da8666a-kube-api-access-f8p9w\") on node \"addons-248098\" DevicePath \"\""
	Dec 13 19:25:58 addons-248098 kubelet[1527]: I1213 19:25:58.496657    1527 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ae0dd9d5-8d09-4bca-88a3-a2588da8666a" path="/var/lib/kubelet/pods/ae0dd9d5-8d09-4bca-88a3-a2588da8666a/volumes"
	Dec 13 19:25:58 addons-248098 kubelet[1527]: I1213 19:25:58.907744    1527 scope.go:117] "RemoveContainer" containerID="bda0caff014eeffa81a8a9e5824ea2b326ebac0d39e3d0cf4adaf18614f9cf6d"
	Dec 13 19:25:58 addons-248098 kubelet[1527]: I1213 19:25:58.924488    1527 scope.go:117] "RemoveContainer" containerID="bda0caff014eeffa81a8a9e5824ea2b326ebac0d39e3d0cf4adaf18614f9cf6d"
	Dec 13 19:25:58 addons-248098 kubelet[1527]: E1213 19:25:58.924879    1527 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bda0caff014eeffa81a8a9e5824ea2b326ebac0d39e3d0cf4adaf18614f9cf6d\": container with ID starting with bda0caff014eeffa81a8a9e5824ea2b326ebac0d39e3d0cf4adaf18614f9cf6d not found: ID does not exist" containerID="bda0caff014eeffa81a8a9e5824ea2b326ebac0d39e3d0cf4adaf18614f9cf6d"
	Dec 13 19:25:58 addons-248098 kubelet[1527]: I1213 19:25:58.924915    1527 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bda0caff014eeffa81a8a9e5824ea2b326ebac0d39e3d0cf4adaf18614f9cf6d"} err="failed to get container status \"bda0caff014eeffa81a8a9e5824ea2b326ebac0d39e3d0cf4adaf18614f9cf6d\": rpc error: code = NotFound desc = could not find container \"bda0caff014eeffa81a8a9e5824ea2b326ebac0d39e3d0cf4adaf18614f9cf6d\": container with ID starting with bda0caff014eeffa81a8a9e5824ea2b326ebac0d39e3d0cf4adaf18614f9cf6d not found: ID does not exist"
	Dec 13 19:26:06 addons-248098 kubelet[1527]: E1213 19:26:06.755328    1527 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117966755072326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:26:06 addons-248098 kubelet[1527]: E1213 19:26:06.755367    1527 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117966755072326,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:26:16 addons-248098 kubelet[1527]: E1213 19:26:16.759613    1527 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117976758913485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:26:16 addons-248098 kubelet[1527]: E1213 19:26:16.759660    1527 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117976758913485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:26:16 addons-248098 kubelet[1527]: I1213 19:26:16.960027    1527 scope.go:117] "RemoveContainer" containerID="e10ba2c21305ff7d66fb4f5e12fd2f4f8ea215f12e165136509e7782bfb5b090"
	Dec 13 19:26:16 addons-248098 kubelet[1527]: I1213 19:26:16.979649    1527 scope.go:117] "RemoveContainer" containerID="999049ad75afc629891c5c2de0f9ef58624b0c773b5dd3871ce0760aaf2270ae"
	Dec 13 19:26:26 addons-248098 kubelet[1527]: E1213 19:26:26.762059    1527 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117986761797446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:26:26 addons-248098 kubelet[1527]: E1213 19:26:26.762098    1527 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117986761797446,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:26:28 addons-248098 kubelet[1527]: I1213 19:26:28.495230    1527 kubelet_pods.go:1007] "Unable to retrieve pull secret, the image pull may not succeed." pod="default/busybox" secret="" err="secret \"gcp-auth\" not found"
	Dec 13 19:26:36 addons-248098 kubelet[1527]: E1213 19:26:36.764512    1527 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117996764234062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:26:36 addons-248098 kubelet[1527]: E1213 19:26:36.764565    1527 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734117996764234062,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:26:46 addons-248098 kubelet[1527]: E1213 19:26:46.767732    1527 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734118006767434312,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:26:46 addons-248098 kubelet[1527]: E1213 19:26:46.767789    1527 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734118006767434312,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:26:56 addons-248098 kubelet[1527]: E1213 19:26:56.770767    1527 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734118016770468775,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:26:56 addons-248098 kubelet[1527]: E1213 19:26:56.770818    1527 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734118016770468775,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:27:06 addons-248098 kubelet[1527]: E1213 19:27:06.773191    1527 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734118026772938687,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:27:06 addons-248098 kubelet[1527]: E1213 19:27:06.773232    1527 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734118026772938687,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:27:16 addons-248098 kubelet[1527]: E1213 19:27:16.775904    1527 eviction_manager.go:257] "Eviction manager: failed to get HasDedicatedImageFs" err="missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734118036775633879,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	Dec 13 19:27:16 addons-248098 kubelet[1527]: E1213 19:27:16.775943    1527 eviction_manager.go:212] "Eviction manager: failed to synchronize" err="eviction manager: failed to get HasDedicatedImageFs: missing image stats: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1734118036775633879,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:615298,},InodesUsed:&UInt64Value{Value:237,},},},ContainerFilesystems:[]*FilesystemUsage{},}"
	
	
	==> storage-provisioner [0c0704d382a69b93cc22a51e1e8cf786c5e6bb3b37718a2ca963a7aa91566d92] <==
	I1213 19:19:40.443989       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I1213 19:19:40.470077       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I1213 19:19:40.470130       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I1213 19:19:40.503801       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I1213 19:19:40.507064       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-248098_ef556317-08dd-4573-8f53-d898928781c1!
	I1213 19:19:40.511401       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"cbcfe82e-6948-4068-b720-61c573d1f4fc", APIVersion:"v1", ResourceVersion:"893", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-248098_ef556317-08dd-4573-8f53-d898928781c1 became leader
	I1213 19:19:40.607521       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-248098_ef556317-08dd-4573-8f53-d898928781c1!
	E1213 19:23:09.647789       1 controller.go:1050] claim "1495a858-fb44-41da-96f5-75a367db6d66" in work queue no longer exists
	
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p addons-248098 -n addons-248098
helpers_test.go:261: (dbg) Run:  kubectl --context addons-248098 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/MetricsServer FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-248098 addons disable metrics-server --alsologtostderr -v=1
--- FAIL: TestAddons/parallel/MetricsServer (305.61s)
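The kube-apiserver log above pins the failure down: calls to the aggregated metrics API at https://10.97.72.3:443/apis/metrics.k8s.io/v1beta1 were refused, the usual signature of a metrics-server pod that never became ready. A minimal sketch for rerunning the check by hand against the same profile, assuming the cluster is still up and that the addon uses the upstream k8s-app=metrics-server label (both are assumptions, not part of the harness):

	# Inspect the metrics-server pod and the APIService it backs
	# (k8s-app=metrics-server is the upstream default label -- an assumption here)
	kubectl --context addons-248098 -n kube-system get pods -l k8s-app=metrics-server
	kubectl --context addons-248098 get apiservice v1beta1.metrics.k8s.io
	# Roughly what the test polls; this fails while the APIService is unavailable
	kubectl --context addons-248098 top nodes

The repeated kubelet "failed to get HasDedicatedImageFs" errors in the post-mortem look unrelated: they recur on every 10s eviction-manager sync because the CRI-O ImageFsInfo response carries no ContainerFilesystems entries, which the kubelet reports as missing image stats.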

Test pass (297/330)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 10.46
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.19
9 TestDownloadOnly/v1.20.0/DeleteAll 0.36
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.26
12 TestDownloadOnly/v1.31.2/json-events 8.75
13 TestDownloadOnly/v1.31.2/preload-exists 0
17 TestDownloadOnly/v1.31.2/LogsDuration 0.09
18 TestDownloadOnly/v1.31.2/DeleteAll 0.23
19 TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds 0.14
21 TestBinaryMirror 0.59
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.08
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.08
27 TestAddons/Setup 177.75
31 TestAddons/serial/GCPAuth/Namespaces 0.21
32 TestAddons/serial/GCPAuth/FakeCredentials 9.91
35 TestAddons/parallel/Registry 17.87
37 TestAddons/parallel/InspektorGadget 11.79
40 TestAddons/parallel/CSI 58.42
41 TestAddons/parallel/Headlamp 18.28
42 TestAddons/parallel/CloudSpanner 5.76
43 TestAddons/parallel/LocalPath 10.74
44 TestAddons/parallel/NvidiaDevicePlugin 6.7
45 TestAddons/parallel/Yakd 11.97
47 TestAddons/StoppedEnableDisable 12.19
48 TestCertOptions 40.46
49 TestCertExpiration 255.64
51 TestForceSystemdFlag 38.61
52 TestForceSystemdEnv 43.5
58 TestErrorSpam/setup 34.35
59 TestErrorSpam/start 0.8
60 TestErrorSpam/status 1.08
61 TestErrorSpam/pause 1.86
62 TestErrorSpam/unpause 1.92
63 TestErrorSpam/stop 1.51
66 TestFunctional/serial/CopySyncFile 0
67 TestFunctional/serial/StartWithProxy 53.18
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 25.77
70 TestFunctional/serial/KubeContext 0.06
71 TestFunctional/serial/KubectlGetPods 0.09
74 TestFunctional/serial/CacheCmd/cache/add_remote 4.53
75 TestFunctional/serial/CacheCmd/cache/add_local 1.45
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.07
77 TestFunctional/serial/CacheCmd/cache/list 0.07
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.31
79 TestFunctional/serial/CacheCmd/cache/cache_reload 2.3
80 TestFunctional/serial/CacheCmd/cache/delete 0.13
81 TestFunctional/serial/MinikubeKubectlCmd 0.14
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.17
83 TestFunctional/serial/ExtraConfig 33.67
84 TestFunctional/serial/ComponentHealth 0.11
85 TestFunctional/serial/LogsCmd 1.84
86 TestFunctional/serial/LogsFileCmd 1.85
87 TestFunctional/serial/InvalidService 4.37
89 TestFunctional/parallel/ConfigCmd 0.55
90 TestFunctional/parallel/DashboardCmd 15.26
91 TestFunctional/parallel/DryRun 0.55
92 TestFunctional/parallel/InternationalLanguage 0.23
93 TestFunctional/parallel/StatusCmd 1.08
97 TestFunctional/parallel/ServiceCmdConnect 10.6
98 TestFunctional/parallel/AddonsCmd 0.21
99 TestFunctional/parallel/PersistentVolumeClaim 25.06
101 TestFunctional/parallel/SSHCmd 0.72
102 TestFunctional/parallel/CpCmd 2.44
104 TestFunctional/parallel/FileSync 0.51
105 TestFunctional/parallel/CertSync 2.29
109 TestFunctional/parallel/NodeLabels 0.16
111 TestFunctional/parallel/NonActiveRuntimeDisabled 0.75
113 TestFunctional/parallel/License 0.36
115 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.71
116 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0
118 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 10.41
119 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.11
120 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0
124 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.11
125 TestFunctional/parallel/ServiceCmd/DeployApp 6.22
126 TestFunctional/parallel/ProfileCmd/profile_not_create 0.47
127 TestFunctional/parallel/ProfileCmd/profile_list 0.43
128 TestFunctional/parallel/ProfileCmd/profile_json_output 0.43
129 TestFunctional/parallel/MountCmd/any-port 9.31
130 TestFunctional/parallel/ServiceCmd/List 0.65
131 TestFunctional/parallel/ServiceCmd/JSONOutput 0.63
132 TestFunctional/parallel/ServiceCmd/HTTPS 0.43
133 TestFunctional/parallel/ServiceCmd/Format 0.41
134 TestFunctional/parallel/ServiceCmd/URL 0.43
135 TestFunctional/parallel/MountCmd/specific-port 2.24
136 TestFunctional/parallel/MountCmd/VerifyCleanup 2.66
137 TestFunctional/parallel/Version/short 0.06
138 TestFunctional/parallel/Version/components 1.38
139 TestFunctional/parallel/ImageCommands/ImageListShort 0.26
140 TestFunctional/parallel/ImageCommands/ImageListTable 0.24
141 TestFunctional/parallel/ImageCommands/ImageListJson 0.25
142 TestFunctional/parallel/ImageCommands/ImageListYaml 0.3
143 TestFunctional/parallel/ImageCommands/ImageBuild 3.67
144 TestFunctional/parallel/ImageCommands/Setup 0.75
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1.39
146 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.99
147 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.2
148 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.69
149 TestFunctional/parallel/UpdateContextCmd/no_changes 0.24
150 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.16
151 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.16
152 TestFunctional/parallel/ImageCommands/ImageRemove 0.95
153 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.04
154 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.7
155 TestFunctional/delete_echo-server_images 0.04
156 TestFunctional/delete_my-image_image 0.02
157 TestFunctional/delete_minikube_cached_images 0.02
161 TestMultiControlPlane/serial/StartCluster 180.19
162 TestMultiControlPlane/serial/DeployApp 9.84
163 TestMultiControlPlane/serial/PingHostFromPods 1.67
164 TestMultiControlPlane/serial/AddWorkerNode 36.22
165 TestMultiControlPlane/serial/NodeLabels 0.12
166 TestMultiControlPlane/serial/HAppyAfterClusterStart 1.08
167 TestMultiControlPlane/serial/CopyFile 20.71
168 TestMultiControlPlane/serial/StopSecondaryNode 12.84
169 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.84
170 TestMultiControlPlane/serial/RestartSecondaryNode 32.69
171 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 1.36
172 TestMultiControlPlane/serial/RestartClusterKeepsNodes 206.08
173 TestMultiControlPlane/serial/DeleteSecondaryNode 12.88
174 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.78
175 TestMultiControlPlane/serial/StopCluster 35.83
176 TestMultiControlPlane/serial/RestartCluster 108.48
177 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.91
178 TestMultiControlPlane/serial/AddSecondaryNode 75.84
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 1.03
183 TestJSONOutput/start/Command 50.4
184 TestJSONOutput/start/Audit 0
186 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
187 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
189 TestJSONOutput/pause/Command 0.78
190 TestJSONOutput/pause/Audit 0
192 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
193 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
195 TestJSONOutput/unpause/Command 0.68
196 TestJSONOutput/unpause/Audit 0
198 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
199 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
201 TestJSONOutput/stop/Command 5.87
202 TestJSONOutput/stop/Audit 0
204 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
205 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
206 TestErrorJSONOutput 0.24
208 TestKicCustomNetwork/create_custom_network 41.63
209 TestKicCustomNetwork/use_default_bridge_network 36.22
210 TestKicExistingNetwork 34.68
211 TestKicCustomSubnet 36.01
212 TestKicStaticIP 35.49
213 TestMainNoArgs 0.06
214 TestMinikubeProfile 69.82
217 TestMountStart/serial/StartWithMountFirst 9.63
218 TestMountStart/serial/VerifyMountFirst 0.26
219 TestMountStart/serial/StartWithMountSecond 7.13
220 TestMountStart/serial/VerifyMountSecond 0.27
221 TestMountStart/serial/DeleteFirst 1.68
222 TestMountStart/serial/VerifyMountPostDelete 0.28
223 TestMountStart/serial/Stop 1.22
224 TestMountStart/serial/RestartStopped 8.53
225 TestMountStart/serial/VerifyMountPostStop 0.27
228 TestMultiNode/serial/FreshStart2Nodes 80.99
229 TestMultiNode/serial/DeployApp2Nodes 7.1
230 TestMultiNode/serial/PingHostFrom2Pods 1.04
231 TestMultiNode/serial/AddNode 32.86
232 TestMultiNode/serial/MultiNodeLabels 0.1
233 TestMultiNode/serial/ProfileList 0.71
234 TestMultiNode/serial/CopyFile 11.18
235 TestMultiNode/serial/StopNode 2.32
236 TestMultiNode/serial/StartAfterStop 10.15
237 TestMultiNode/serial/RestartKeepsNodes 114.78
238 TestMultiNode/serial/DeleteNode 5.62
239 TestMultiNode/serial/StopMultiNode 23.99
240 TestMultiNode/serial/RestartMultiNode 53.66
241 TestMultiNode/serial/ValidateNameConflict 33.39
246 TestPreload 130.15
248 TestScheduledStopUnix 108.79
251 TestInsufficientStorage 13.75
252 TestRunningBinaryUpgrade 64.82
254 TestKubernetesUpgrade 391.1
255 TestMissingContainerUpgrade 185.94
257 TestNoKubernetes/serial/StartNoK8sWithVersion 0.1
258 TestNoKubernetes/serial/StartWithK8s 38.94
259 TestNoKubernetes/serial/StartWithStopK8s 9.39
260 TestNoKubernetes/serial/Start 9.76
261 TestNoKubernetes/serial/VerifyK8sNotRunning 0.28
262 TestNoKubernetes/serial/ProfileList 1.08
263 TestNoKubernetes/serial/Stop 1.28
264 TestNoKubernetes/serial/StartNoArgs 8.42
265 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.36
266 TestStoppedBinaryUpgrade/Setup 1.66
267 TestStoppedBinaryUpgrade/Upgrade 118.46
268 TestStoppedBinaryUpgrade/MinikubeLogs 1.39
277 TestPause/serial/Start 56.83
278 TestPause/serial/SecondStartNoReconfiguration 29.4
279 TestPause/serial/Pause 1.06
280 TestPause/serial/VerifyStatus 0.48
281 TestPause/serial/Unpause 1.04
282 TestPause/serial/PauseAgain 1.31
283 TestPause/serial/DeletePaused 4.91
284 TestPause/serial/VerifyDeletedResources 0.62
292 TestNetworkPlugins/group/false 4.91
297 TestStartStop/group/old-k8s-version/serial/FirstStart 191.39
299 TestStartStop/group/embed-certs/serial/FirstStart 52.54
300 TestStartStop/group/old-k8s-version/serial/DeployApp 11.76
301 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 1.64
302 TestStartStop/group/old-k8s-version/serial/Stop 12.21
303 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.32
304 TestStartStop/group/old-k8s-version/serial/SecondStart 140.13
305 TestStartStop/group/embed-certs/serial/DeployApp 10.38
306 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.35
307 TestStartStop/group/embed-certs/serial/Stop 12.15
308 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.3
309 TestStartStop/group/embed-certs/serial/SecondStart 306.86
310 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.04
311 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 6.1
312 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.27
313 TestStartStop/group/old-k8s-version/serial/Pause 3.11
315 TestStartStop/group/no-preload/serial/FirstStart 66.67
316 TestStartStop/group/no-preload/serial/DeployApp 10.34
317 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.19
318 TestStartStop/group/no-preload/serial/Stop 11.96
319 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.37
320 TestStartStop/group/no-preload/serial/SecondStart 300.4
321 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6.01
322 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.11
323 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.26
324 TestStartStop/group/embed-certs/serial/Pause 3.39
326 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 52.92
327 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 14.37
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.22
329 TestStartStop/group/default-k8s-diff-port/serial/Stop 11.97
330 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.22
331 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 282.15
332 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
333 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.1
334 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.31
335 TestStartStop/group/no-preload/serial/Pause 3.71
337 TestStartStop/group/newest-cni/serial/FirstStart 37.9
338 TestStartStop/group/newest-cni/serial/DeployApp 0
339 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.27
340 TestStartStop/group/newest-cni/serial/Stop 1.26
341 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.22
342 TestStartStop/group/newest-cni/serial/SecondStart 17.58
343 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.32
346 TestStartStop/group/newest-cni/serial/Pause 3.33
347 TestNetworkPlugins/group/auto/Start 48.37
348 TestNetworkPlugins/group/auto/KubeletFlags 0.3
349 TestNetworkPlugins/group/auto/NetCatPod 10.31
350 TestNetworkPlugins/group/auto/DNS 0.2
351 TestNetworkPlugins/group/auto/Localhost 0.15
352 TestNetworkPlugins/group/auto/HairPin 0.16
353 TestNetworkPlugins/group/kindnet/Start 51.94
354 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
355 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.11
356 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.31
357 TestStartStop/group/default-k8s-diff-port/serial/Pause 4.97
358 TestNetworkPlugins/group/calico/Start 67.6
359 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
360 TestNetworkPlugins/group/kindnet/KubeletFlags 0.37
361 TestNetworkPlugins/group/kindnet/NetCatPod 14.33
362 TestNetworkPlugins/group/kindnet/DNS 0.25
363 TestNetworkPlugins/group/kindnet/Localhost 0.2
364 TestNetworkPlugins/group/kindnet/HairPin 0.21
365 TestNetworkPlugins/group/custom-flannel/Start 63.58
366 TestNetworkPlugins/group/calico/ControllerPod 6.02
367 TestNetworkPlugins/group/calico/KubeletFlags 0.37
368 TestNetworkPlugins/group/calico/NetCatPod 14.37
369 TestNetworkPlugins/group/calico/DNS 0.29
370 TestNetworkPlugins/group/calico/Localhost 0.22
371 TestNetworkPlugins/group/calico/HairPin 0.23
372 TestNetworkPlugins/group/enable-default-cni/Start 79.9
373 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.39
374 TestNetworkPlugins/group/custom-flannel/NetCatPod 12.36
375 TestNetworkPlugins/group/custom-flannel/DNS 0.26
376 TestNetworkPlugins/group/custom-flannel/Localhost 0.2
377 TestNetworkPlugins/group/custom-flannel/HairPin 0.18
378 TestNetworkPlugins/group/flannel/Start 55.47
379 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.48
380 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.51
381 TestNetworkPlugins/group/enable-default-cni/DNS 0.21
382 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
383 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
384 TestNetworkPlugins/group/flannel/ControllerPod 6.01
385 TestNetworkPlugins/group/flannel/KubeletFlags 0.39
386 TestNetworkPlugins/group/flannel/NetCatPod 11.34
387 TestNetworkPlugins/group/bridge/Start 49.51
388 TestNetworkPlugins/group/flannel/DNS 0.21
389 TestNetworkPlugins/group/flannel/Localhost 0.23
390 TestNetworkPlugins/group/flannel/HairPin 0.2
391 TestNetworkPlugins/group/bridge/KubeletFlags 0.31
392 TestNetworkPlugins/group/bridge/NetCatPod 10.26
393 TestNetworkPlugins/group/bridge/DNS 0.5
394 TestNetworkPlugins/group/bridge/Localhost 0.21
395 TestNetworkPlugins/group/bridge/HairPin 0.16
TestDownloadOnly/v1.20.0/json-events (10.46s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-307056 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-307056 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker  --container-runtime=crio: (10.463773627s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (10.46s)
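With -o=json, minikube emits one JSON event per line on stdout, and this test consumes that stream instead of the plain-text UI. A hedged sketch of inspecting the same events interactively, assuming jq is installed and using a throwaway profile name (download-only-demo is hypothetical):

	# Print only the event type of each JSON line minikube emits
	out/minikube-linux-arm64 start -o=json --download-only -p download-only-demo \
	  --kubernetes-version=v1.20.0 --container-runtime=crio --driver=docker | jq -r '.type'

Each line should be a CloudEvents-style object, so .type surfaces the event kinds (step markers, download progress, and so on) that the json-events test checks for.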

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I1213 19:18:15.449841  602199 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
I1213 19:18:15.449924  602199 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20090-596807/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)
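The preload check is a plain file-existence test against the cache path logged above; the manual equivalent is a single ls, with the path copied verbatim from the log:

	# List the preload cache; the tarball name encodes k8s version, runtime, and arch
	ls -lh /home/jenkins/minikube-integration/20090-596807/.minikube/cache/preloaded-tarball/

Seeing preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 there is what lets subsequent starts for this Kubernetes version skip pulling images one by one.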

TestDownloadOnly/v1.20.0/LogsDuration (0.19s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-307056
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-307056: exit status 85 (191.864302ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-307056 | jenkins | v1.34.0 | 13 Dec 24 19:18 UTC |          |
	|         | -p download-only-307056        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=docker                |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/13 19:18:05
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 19:18:05.051116  602206 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:18:05.051268  602206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:18:05.051279  602206 out.go:358] Setting ErrFile to fd 2...
	I1213 19:18:05.051284  602206 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:18:05.051634  602206 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-596807/.minikube/bin
	W1213 19:18:05.051796  602206 root.go:314] Error reading config file at /home/jenkins/minikube-integration/20090-596807/.minikube/config/config.json: open /home/jenkins/minikube-integration/20090-596807/.minikube/config/config.json: no such file or directory
	I1213 19:18:05.052306  602206 out.go:352] Setting JSON to true
	I1213 19:18:05.053321  602206 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10801,"bootTime":1734106684,"procs":172,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 19:18:05.053398  602206 start.go:139] virtualization:  
	I1213 19:18:05.057540  602206 out.go:97] [download-only-307056] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	W1213 19:18:05.057759  602206 preload.go:293] Failed to list preload files: open /home/jenkins/minikube-integration/20090-596807/.minikube/cache/preloaded-tarball: no such file or directory
	I1213 19:18:05.057862  602206 notify.go:220] Checking for updates...
	I1213 19:18:05.061179  602206 out.go:169] MINIKUBE_LOCATION=20090
	I1213 19:18:05.063608  602206 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:18:05.065734  602206 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20090-596807/kubeconfig
	I1213 19:18:05.068286  602206 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-596807/.minikube
	I1213 19:18:05.070573  602206 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1213 19:18:05.075300  602206 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 19:18:05.075581  602206 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 19:18:05.100266  602206 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1213 19:18:05.100392  602206 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:18:05.162664  602206 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-13 19:18:05.152088378 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1213 19:18:05.162795  602206 docker.go:318] overlay module found
	I1213 19:18:05.165636  602206 out.go:97] Using the docker driver based on user configuration
	I1213 19:18:05.165689  602206 start.go:297] selected driver: docker
	I1213 19:18:05.165698  602206 start.go:901] validating driver "docker" against <nil>
	I1213 19:18:05.165807  602206 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:18:05.220573  602206 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:29 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-13 19:18:05.210946424 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1213 19:18:05.220870  602206 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1213 19:18:05.221328  602206 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1213 19:18:05.221602  602206 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 19:18:05.224498  602206 out.go:169] Using Docker driver with root privileges
	I1213 19:18:05.226978  602206 cni.go:84] Creating CNI manager for ""
	I1213 19:18:05.227126  602206 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 19:18:05.227150  602206 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 19:18:05.227266  602206 start.go:340] cluster config:
	{Name:download-only-307056 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-307056 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:18:05.230213  602206 out.go:97] Starting "download-only-307056" primary control-plane node in "download-only-307056" cluster
	I1213 19:18:05.230252  602206 cache.go:121] Beginning downloading kic base image for docker with crio
	I1213 19:18:05.232560  602206 out.go:97] Pulling base image v0.0.45-1734029593-20090 ...
	I1213 19:18:05.232597  602206 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1213 19:18:05.232645  602206 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local docker daemon
	I1213 19:18:05.249800  602206 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 to local cache
	I1213 19:18:05.250680  602206 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local cache directory
	I1213 19:18:05.250790  602206 image.go:148] Writing gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 to local cache
	I1213 19:18:05.295904  602206 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I1213 19:18:05.295929  602206 cache.go:56] Caching tarball of preloaded images
	I1213 19:18:05.296671  602206 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1213 19:18:05.299390  602206 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I1213 19:18:05.299413  602206 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I1213 19:18:05.411288  602206 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4?checksum=md5:59cd2ef07b53f039bfd1761b921f2a02 -> /home/jenkins/minikube-integration/20090-596807/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4
	I1213 19:18:10.680484  602206 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I1213 19:18:10.680595  602206 preload.go:254] verifying checksum of /home/jenkins/minikube-integration/20090-596807/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-arm64.tar.lz4 ...
	I1213 19:18:11.793360  602206 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I1213 19:18:11.793769  602206 profile.go:143] Saving config to /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/download-only-307056/config.json ...
	I1213 19:18:11.793805  602206 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/download-only-307056/config.json: {Name:mkff990439924c1ec90e9067c9c483fdece1a8e0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I1213 19:18:11.794006  602206 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I1213 19:18:11.794297  602206 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/linux/arm64/kubectl.sha256 -> /home/jenkins/minikube-integration/20090-596807/.minikube/cache/linux/arm64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-307056 host does not exist
	  To start a cluster, run: "minikube start -p download-only-307056"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.19s)
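Exit status 85 is the expected outcome here, not a failure: the profile was created with --download-only, so its control-plane host was never started and "minikube logs" has no running cluster to read from (see "The control-plane node download-only-307056 host does not exist" in the stdout above). A quick sketch of the same check:

    out/minikube-linux-arm64 logs -p download-only-307056; echo "exit=$?"   # expect exit=85 for a download-only profile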

TestDownloadOnly/v1.20.0/DeleteAll (0.36s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.36s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.26s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-307056
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.26s)

TestDownloadOnly/v1.31.2/json-events (8.75s)

=== RUN   TestDownloadOnly/v1.31.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -o=json --download-only -p download-only-161886 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -o=json --download-only -p download-only-161886 --force --alsologtostderr --kubernetes-version=v1.31.2 --container-runtime=crio --driver=docker  --container-runtime=crio: (8.753963236s)
--- PASS: TestDownloadOnly/v1.31.2/json-events (8.75s)

TestDownloadOnly/v1.31.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.2/preload-exists
I1213 19:18:25.011505  602199 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
I1213 19:18:25.011561  602199 preload.go:146] Found local preload: /home/jenkins/minikube-integration/20090-596807/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4
--- PASS: TestDownloadOnly/v1.31.2/preload-exists (0.00s)

TestDownloadOnly/v1.31.2/LogsDuration (0.09s)

=== RUN   TestDownloadOnly/v1.31.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-arm64 logs -p download-only-161886
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-arm64 logs -p download-only-161886: exit status 85 (86.892115ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-307056 | jenkins | v1.34.0 | 13 Dec 24 19:18 UTC |                     |
	|         | -p download-only-307056        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.34.0 | 13 Dec 24 19:18 UTC | 13 Dec 24 19:18 UTC |
	| delete  | -p download-only-307056        | download-only-307056 | jenkins | v1.34.0 | 13 Dec 24 19:18 UTC | 13 Dec 24 19:18 UTC |
	| start   | -o=json --download-only        | download-only-161886 | jenkins | v1.34.0 | 13 Dec 24 19:18 UTC |                     |
	|         | -p download-only-161886        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.2   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=docker                |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/12/13 19:18:16
	Running on machine: ip-172-31-31-251
	Binary: Built with gc go1.23.3 for linux/arm64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I1213 19:18:16.305763  602411 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:18:16.305918  602411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:18:16.305930  602411 out.go:358] Setting ErrFile to fd 2...
	I1213 19:18:16.305936  602411 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:18:16.306232  602411 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-596807/.minikube/bin
	I1213 19:18:16.306697  602411 out.go:352] Setting JSON to true
	I1213 19:18:16.307617  602411 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":10812,"bootTime":1734106684,"procs":170,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 19:18:16.307698  602411 start.go:139] virtualization:  
	I1213 19:18:16.332997  602411 out.go:97] [download-only-161886] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1213 19:18:16.333202  602411 notify.go:220] Checking for updates...
	I1213 19:18:16.363665  602411 out.go:169] MINIKUBE_LOCATION=20090
	I1213 19:18:16.396035  602411 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:18:16.428211  602411 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/20090-596807/kubeconfig
	I1213 19:18:16.455219  602411 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-596807/.minikube
	I1213 19:18:16.479521  602411 out.go:169] MINIKUBE_BIN=out/minikube-linux-arm64
	W1213 19:18:16.537224  602411 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I1213 19:18:16.537526  602411 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 19:18:16.561034  602411 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1213 19:18:16.561153  602411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:18:16.625965  602411 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-12-13 19:18:16.616810405 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1213 19:18:16.626185  602411 docker.go:318] overlay module found
	I1213 19:18:16.629024  602411 out.go:97] Using the docker driver based on user configuration
	I1213 19:18:16.629055  602411 start.go:297] selected driver: docker
	I1213 19:18:16.629063  602411 start.go:901] validating driver "docker" against <nil>
	I1213 19:18:16.629163  602411 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:18:16.685896  602411 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:1 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:25 OomKillDisable:true NGoroutines:44 SystemTime:2024-12-13 19:18:16.676341656 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:a
arch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge
-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1213 19:18:16.686139  602411 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I1213 19:18:16.686483  602411 start_flags.go:393] Using suggested 2200MB memory alloc based on sys=7834MB, container=7834MB
	I1213 19:18:16.686650  602411 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I1213 19:18:16.689964  602411 out.go:169] Using Docker driver with root privileges
	I1213 19:18:16.692625  602411 cni.go:84] Creating CNI manager for ""
	I1213 19:18:16.692700  602411 cni.go:143] "docker" driver + "crio" runtime found, recommending kindnet
	I1213 19:18:16.692722  602411 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I1213 19:18:16.692817  602411 start.go:340] cluster config:
	{Name:download-only-161886 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:2200 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:download-only-161886 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Co
ntainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:18:16.695234  602411 out.go:97] Starting "download-only-161886" primary control-plane node in "download-only-161886" cluster
	I1213 19:18:16.695270  602411 cache.go:121] Beginning downloading kic base image for docker with crio
	I1213 19:18:16.698112  602411 out.go:97] Pulling base image v0.0.45-1734029593-20090 ...
	I1213 19:18:16.698147  602411 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 19:18:16.698202  602411 image.go:79] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local docker daemon
	I1213 19:18:16.714541  602411 cache.go:149] Downloading gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 to local cache
	I1213 19:18:16.714677  602411 image.go:63] Checking for gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local cache directory
	I1213 19:18:16.714703  602411 image.go:66] Found gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 in local cache directory, skipping pull
	I1213 19:18:16.714715  602411 image.go:135] gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 exists in cache, skipping pull
	I1213 19:18:16.714723  602411 cache.go:152] successfully saved gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 as a tarball
	I1213 19:18:16.755142  602411 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4
	I1213 19:18:16.755169  602411 cache.go:56] Caching tarball of preloaded images
	I1213 19:18:16.755349  602411 preload.go:131] Checking if preload exists for k8s version v1.31.2 and runtime crio
	I1213 19:18:16.757882  602411 out.go:97] Downloading Kubernetes v1.31.2 preload ...
	I1213 19:18:16.757926  602411 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4 ...
	I1213 19:18:16.851232  602411 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.2/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4?checksum=md5:810fe254d498dda367f4e14b5cba638f -> /home/jenkins/minikube-integration/20090-596807/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.2-cri-o-overlay-arm64.tar.lz4
	
	
	* The control-plane node download-only-161886 host does not exist
	  To start a cluster, run: "minikube start -p download-only-161886"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.2/LogsDuration (0.09s)

TestDownloadOnly/v1.31.2/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-arm64 delete --all
--- PASS: TestDownloadOnly/v1.31.2/DeleteAll (0.23s)

TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-arm64 delete -p download-only-161886
--- PASS: TestDownloadOnly/v1.31.2/DeleteAlwaysSucceeds (0.14s)

TestBinaryMirror (0.59s)

=== RUN   TestBinaryMirror
I1213 19:18:26.331721  602199 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.31.2/bin/linux/arm64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.2/bin/linux/arm64/kubectl.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p binary-mirror-356185 --alsologtostderr --binary-mirror http://127.0.0.1:34457 --driver=docker  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-356185" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p binary-mirror-356185
--- PASS: TestBinaryMirror (0.59s)
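--binary-mirror points minikube's kubectl/kubelet/kubeadm downloads at an alternative HTTP endpoint in place of dl.k8s.io. A hedged sketch of standing up such a mirror for a run like the one above, assuming a hypothetical local ./mirror directory laid out like the dl.k8s.io release tree:

    python3 -m http.server 34457 --directory ./mirror &   # local static mirror on the port used above
    out/minikube-linux-arm64 start --download-only -p binary-mirror-356185 \
      --binary-mirror http://127.0.0.1:34457 --driver=docker --container-runtime=crio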

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-248098
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-arm64 addons enable dashboard -p addons-248098: exit status 85 (80.748481ms)

-- stdout --
	* Profile "addons-248098" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-248098"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.08s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-248098
addons_test.go:950: (dbg) Non-zero exit: out/minikube-linux-arm64 addons disable dashboard -p addons-248098: exit status 85 (81.405435ms)

-- stdout --
	* Profile "addons-248098" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-248098"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.08s)

TestAddons/Setup (177.75s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-linux-arm64 start -p addons-248098 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-linux-arm64 start -p addons-248098 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=docker  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (2m57.753458248s)
--- PASS: TestAddons/Setup (177.75s)
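The addon set is enabled up front through repeated --addons flags on the start command above; individual addons can also be inspected and toggled on the running profile afterwards, for example:

    out/minikube-linux-arm64 addons list -p addons-248098            # show enabled/disabled addons
    out/minikube-linux-arm64 addons enable metrics-server -p addons-248098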

TestAddons/serial/GCPAuth/Namespaces (0.21s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-248098 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-248098 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.21s)

TestAddons/serial/GCPAuth/FakeCredentials (9.91s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-248098 create -f testdata/busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-248098 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [af901830-743d-41d3-b3a4-35386238ea94] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [af901830-743d-41d3-b3a4-35386238ea94] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.010447689s
addons_test.go:633: (dbg) Run:  kubectl --context addons-248098 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-248098 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-248098 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-248098 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (9.91s)
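The exec probes above verify that the gcp-auth webhook injected fake credentials into the pod. The same checks, runnable by hand with the pod and context from this run:

    kubectl --context addons-248098 exec busybox -- printenv GOOGLE_APPLICATION_CREDENTIALS GOOGLE_CLOUD_PROJECT
    kubectl --context addons-248098 exec busybox -- cat /google-app-creds.json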

TestAddons/parallel/Registry (17.87s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 13.336676ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-5cc95cd69-5n4c9" [7ec0f719-ff86-4cc0-9868-18a171b8d618] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 6.003676028s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-nvc8d" [c14eabdb-94a1-4ed0-8a97-51210e96f13a] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004464447s
addons_test.go:331: (dbg) Run:  kubectl --context addons-248098 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-248098 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-248098 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.68219488s)
addons_test.go:350: (dbg) Run:  out/minikube-linux-arm64 -p addons-248098 ip
2024/12/13 19:22:00 [DEBUG] GET http://192.168.49.2:5000
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-248098 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.87s)
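The core of the registry check is a DNS plus HTTP probe against the in-cluster Service from a throwaway pod; the command from the test above is reusable as-is:

    kubectl --context addons-248098 run --rm -it registry-test --restart=Never \
      --image=gcr.io/k8s-minikube/busybox -- \
      sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"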

TestAddons/parallel/InspektorGadget (11.79s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-zz8jb" [4150ba1e-a840-4a93-8a68-cc53f5ee8c95] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.004206224s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-248098 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-248098 addons disable inspektor-gadget --alsologtostderr -v=1: (5.786996647s)
--- PASS: TestAddons/parallel/InspektorGadget (11.79s)
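The readiness gate here is a label-selector wait in the gadget namespace. An equivalent one-liner using the selector from the test, with kubectl wait standing in for the harness's poll loop:

    kubectl --context addons-248098 -n gadget wait pod -l k8s-app=gadget --for=condition=Ready --timeout=8m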

TestAddons/parallel/CSI (58.42s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I1213 19:22:19.438952  602199 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I1213 19:22:19.447360  602199 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I1213 19:22:19.447391  602199 kapi.go:107] duration metric: took 8.456939ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 8.468926ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-248098 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-248098 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [c2f6ff6c-6dcd-4925-84a2-572838643788] Pending
helpers_test.go:344: "task-pv-pod" [c2f6ff6c-6dcd-4925-84a2-572838643788] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [c2f6ff6c-6dcd-4925-84a2-572838643788] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 11.004448788s
addons_test.go:511: (dbg) Run:  kubectl --context addons-248098 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-248098 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-248098 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-248098 delete pod task-pv-pod
addons_test.go:527: (dbg) Run:  kubectl --context addons-248098 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-248098 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-248098 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [39ea072a-8858-4779-8d68-9d3b58c6b03a] Pending
helpers_test.go:344: "task-pv-pod-restore" [39ea072a-8858-4779-8d68-9d3b58c6b03a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [39ea072a-8858-4779-8d68-9d3b58c6b03a] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.003722849s
addons_test.go:553: (dbg) Run:  kubectl --context addons-248098 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-248098 delete pod task-pv-pod-restore: (1.104124313s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-248098 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-248098 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-248098 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-248098 addons disable volumesnapshots --alsologtostderr -v=1: (1.080794204s)
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-248098 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-248098 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.812047854s)
--- PASS: TestAddons/parallel/CSI (58.42s)
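The flow above is: create a PVC, mount it in a pod, snapshot it, then restore the snapshot into a new PVC and pod. The testdata manifests are not shown in this log; a minimal hedged sketch of the initial PVC, with the storage class name assumed to be the csi-hostpath-sc class the addon installs:

    # pvc.yaml, a sketch; storageClassName assumed from the csi-hostpath-driver addon
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: hpvc
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
      storageClassName: csi-hostpath-sc

    kubectl --context addons-248098 apply -f pvc.yaml
    kubectl --context addons-248098 get pvc hpvc -o jsonpath={.status.phase}   # poll until Bound, as the test does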

TestAddons/parallel/Headlamp (18.28s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-linux-arm64 addons enable headlamp -p addons-248098 --alsologtostderr -v=1
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-cd8ffd6fc-w4m42" [2792e0b5-847f-4e29-be91-2c3c13ce43b9] Pending
helpers_test.go:344: "headlamp-cd8ffd6fc-w4m42" [2792e0b5-847f-4e29-be91-2c3c13ce43b9] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-cd8ffd6fc-w4m42" [2792e0b5-847f-4e29-be91-2c3c13ce43b9] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 11.004341075s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-248098 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-248098 addons disable headlamp --alsologtostderr -v=1: (6.295981088s)
--- PASS: TestAddons/parallel/Headlamp (18.28s)

TestAddons/parallel/CloudSpanner (5.76s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-dc5db94f4-wsx8h" [47bc0641-2e90-4c59-b7e6-0c50b6a1fa23] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003842277s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-248098 addons disable cloud-spanner --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CloudSpanner (5.76s)

TestAddons/parallel/LocalPath (10.74s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-248098 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-248098 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-248098 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [5bda62f5-44af-4552-b853-adb66d58cc17] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [5bda62f5-44af-4552-b853-adb66d58cc17] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [5bda62f5-44af-4552-b853-adb66d58cc17] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.004915533s
addons_test.go:906: (dbg) Run:  kubectl --context addons-248098 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-linux-arm64 -p addons-248098 ssh "cat /opt/local-path-provisioner/pvc-3a3ae2c7-94c0-4b5c-a99c-675901123adf_default_test-pvc/file1"
addons_test.go:927: (dbg) Run:  kubectl --context addons-248098 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-248098 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-248098 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (10.74s)
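
Note: a sketch of the local-path workload pattern driven above (pod, PVC, label, and file names come from the log; the local-path class name, the 64Mi request, and the exact busybox command are assumptions):

kubectl --context addons-248098 apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test-pvc
spec:
  storageClassName: local-path   # assumed class installed by storage-provisioner-rancher
  accessModes: [ReadWriteOnce]
  resources:
    requests:
      storage: 64Mi
---
apiVersion: v1
kind: Pod
metadata:
  name: test-local-path
  labels:
    run: test-local-path
spec:
  restartPolicy: Never
  containers:
  - name: busybox
    image: busybox
    command: ["sh", "-c", "echo local-path > /test/file1"]   # writes the file1 the test later reads back
    volumeMounts:
    - name: data
      mountPath: /test
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: test-pvc
EOF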

TestAddons/parallel/NvidiaDevicePlugin (6.7s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-xsrsn" [bfc935e3-d013-494e-8380-5b4be1f7a0c9] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.004918924s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-248098 addons disable nvidia-device-plugin --alsologtostderr -v=1
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.70s)

TestAddons/parallel/Yakd (11.97s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-67d98fc6b-zc2wc" [52236ad5-aa93-478a-b972-2827a8f3fd36] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.008443209s
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-248098 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-linux-arm64 -p addons-248098 addons disable yakd --alsologtostderr -v=1: (5.958540089s)
--- PASS: TestAddons/parallel/Yakd (11.97s)

TestAddons/StoppedEnableDisable (12.19s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-linux-arm64 stop -p addons-248098
addons_test.go:170: (dbg) Done: out/minikube-linux-arm64 stop -p addons-248098: (11.886155627s)
addons_test.go:174: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p addons-248098
addons_test.go:178: (dbg) Run:  out/minikube-linux-arm64 addons disable dashboard -p addons-248098
addons_test.go:183: (dbg) Run:  out/minikube-linux-arm64 addons disable gvisor -p addons-248098
--- PASS: TestAddons/StoppedEnableDisable (12.19s)

TestCertOptions (40.46s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-arm64 start -p cert-options-581509 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-arm64 start -p cert-options-581509 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=docker  --container-runtime=crio: (37.675584481s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-arm64 -p cert-options-581509 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-581509 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-arm64 ssh -p cert-options-581509 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-581509" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-options-581509
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-options-581509: (2.068837669s)
--- PASS: TestCertOptions (40.46s)
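
Note: the two assertions above can be reproduced by hand roughly as follows (a sketch; the grep patterns are assumptions about the output layout):

minikube -p cert-options-581509 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt" \
  | grep -A1 'Subject Alternative Name'   # should list 192.168.15.15 and www.google.com among the SANs
kubectl --context cert-options-581509 config view | grep 8555   # API server served on the custom port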

TestCertExpiration (255.64s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-100897 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio
cert_options_test.go:123: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-100897 --memory=2048 --cert-expiration=3m --driver=docker  --container-runtime=crio: (42.571420793s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-arm64 start -p cert-expiration-100897 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-arm64 start -p cert-expiration-100897 --memory=2048 --cert-expiration=8760h --driver=docker  --container-runtime=crio: (30.826866018s)
helpers_test.go:175: Cleaning up "cert-expiration-100897" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cert-expiration-100897
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p cert-expiration-100897: (2.242655085s)
--- PASS: TestCertExpiration (255.64s)

TestForceSystemdFlag (38.61s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-flag-097888 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-flag-097888 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (35.541323653s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-arm64 -p force-systemd-flag-097888 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-097888" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-flag-097888
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-flag-097888: (2.668881718s)
--- PASS: TestForceSystemdFlag (38.61s)
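
Note: the `cat /etc/crio/crio.conf.d/02-crio.conf` step verifies that --force-systemd switched CRI-O to the systemd cgroup manager; a manual spot-check might look like this (the exact drop-in contents are an assumption):

minikube -p force-systemd-flag-097888 ssh "cat /etc/crio/crio.conf.d/02-crio.conf" | grep cgroup_manager
# expected, if the flag took effect:
#   cgroup_manager = "systemd"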

TestForceSystemdEnv (43.5s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-arm64 start -p force-systemd-env-749800 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio
E1213 20:08:45.383788  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
docker_test.go:155: (dbg) Done: out/minikube-linux-arm64 start -p force-systemd-env-749800 --memory=2048 --alsologtostderr -v=5 --driver=docker  --container-runtime=crio: (40.8181649s)
helpers_test.go:175: Cleaning up "force-systemd-env-749800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p force-systemd-env-749800
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p force-systemd-env-749800: (2.686412537s)
--- PASS: TestForceSystemdEnv (43.50s)

TestErrorSpam/setup (34.35s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-arm64 start -p nospam-019662 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-019662 --driver=docker  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-arm64 start -p nospam-019662 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-019662 --driver=docker  --container-runtime=crio: (34.347072315s)
--- PASS: TestErrorSpam/setup (34.35s)

TestErrorSpam/start (0.8s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-019662 --log_dir /tmp/nospam-019662 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-019662 --log_dir /tmp/nospam-019662 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-019662 --log_dir /tmp/nospam-019662 start --dry-run
--- PASS: TestErrorSpam/start (0.80s)

TestErrorSpam/status (1.08s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-019662 --log_dir /tmp/nospam-019662 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-019662 --log_dir /tmp/nospam-019662 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-019662 --log_dir /tmp/nospam-019662 status
--- PASS: TestErrorSpam/status (1.08s)

TestErrorSpam/pause (1.86s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-019662 --log_dir /tmp/nospam-019662 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-019662 --log_dir /tmp/nospam-019662 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-019662 --log_dir /tmp/nospam-019662 pause
--- PASS: TestErrorSpam/pause (1.86s)

TestErrorSpam/unpause (1.92s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-019662 --log_dir /tmp/nospam-019662 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-019662 --log_dir /tmp/nospam-019662 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-019662 --log_dir /tmp/nospam-019662 unpause
--- PASS: TestErrorSpam/unpause (1.92s)

TestErrorSpam/stop (1.51s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-019662 --log_dir /tmp/nospam-019662 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-arm64 -p nospam-019662 --log_dir /tmp/nospam-019662 stop: (1.293721441s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-arm64 -p nospam-019662 --log_dir /tmp/nospam-019662 stop
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-arm64 -p nospam-019662 --log_dir /tmp/nospam-019662 stop
--- PASS: TestErrorSpam/stop (1.51s)

TestFunctional/serial/CopySyncFile (0s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1855: local sync path: /home/jenkins/minikube-integration/20090-596807/.minikube/files/etc/test/nested/copy/602199/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

TestFunctional/serial/StartWithProxy (53.18s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2234: (dbg) Run:  out/minikube-linux-arm64 start -p functional-355453 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio
functional_test.go:2234: (dbg) Done: out/minikube-linux-arm64 start -p functional-355453 --memory=4000 --apiserver-port=8441 --wait=all --driver=docker  --container-runtime=crio: (53.178497873s)
--- PASS: TestFunctional/serial/StartWithProxy (53.18s)

TestFunctional/serial/AuditLog (0s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

TestFunctional/serial/SoftStart (25.77s)

=== RUN   TestFunctional/serial/SoftStart
I1213 19:29:23.403459  602199 config.go:182] Loaded profile config "functional-355453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
functional_test.go:659: (dbg) Run:  out/minikube-linux-arm64 start -p functional-355453 --alsologtostderr -v=8
functional_test.go:659: (dbg) Done: out/minikube-linux-arm64 start -p functional-355453 --alsologtostderr -v=8: (25.767744883s)
functional_test.go:663: soft start took 25.768868409s for "functional-355453" cluster.
I1213 19:29:49.171523  602199 config.go:182] Loaded profile config "functional-355453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/SoftStart (25.77s)

TestFunctional/serial/KubeContext (0.06s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:681: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.06s)

TestFunctional/serial/KubectlGetPods (0.09s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:696: (dbg) Run:  kubectl --context functional-355453 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.09s)

TestFunctional/serial/CacheCmd/cache/add_remote (4.53s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 cache add registry.k8s.io/pause:3.1
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-355453 cache add registry.k8s.io/pause:3.1: (1.496905708s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 cache add registry.k8s.io/pause:3.3
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-355453 cache add registry.k8s.io/pause:3.3: (1.553766497s)
functional_test.go:1049: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 cache add registry.k8s.io/pause:latest
functional_test.go:1049: (dbg) Done: out/minikube-linux-arm64 -p functional-355453 cache add registry.k8s.io/pause:latest: (1.482959258s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (4.53s)

TestFunctional/serial/CacheCmd/cache/add_local (1.45s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1077: (dbg) Run:  docker build -t minikube-local-cache-test:functional-355453 /tmp/TestFunctionalserialCacheCmdcacheadd_local2927410639/001
functional_test.go:1089: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 cache add minikube-local-cache-test:functional-355453
functional_test.go:1094: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 cache delete minikube-local-cache-test:functional-355453
functional_test.go:1083: (dbg) Run:  docker rmi minikube-local-cache-test:functional-355453
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.45s)

TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1102: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.07s)

TestFunctional/serial/CacheCmd/cache/list (0.07s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1110: (dbg) Run:  out/minikube-linux-arm64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.07s)

TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1124: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.31s)

TestFunctional/serial/CacheCmd/cache/cache_reload (2.3s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1147: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1153: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-355453 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (313.797548ms)
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test.go:1158: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 cache reload
functional_test.go:1158: (dbg) Done: out/minikube-linux-arm64 -p functional-355453 cache reload: (1.309249103s)
functional_test.go:1163: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (2.30s)
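
Note: the reload cycle above, condensed into a hand-runnable sketch (all commands appear verbatim in the log; the comments are interpretation):

minikube -p functional-355453 ssh sudo crictl rmi registry.k8s.io/pause:latest       # drop the image from the node
minikube -p functional-355453 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # now fails: image is gone
minikube -p functional-355453 cache reload                                           # push cached images back onto the node
minikube -p functional-355453 ssh sudo crictl inspecti registry.k8s.io/pause:latest  # succeeds again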

TestFunctional/serial/CacheCmd/cache/delete (0.13s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1172: (dbg) Run:  out/minikube-linux-arm64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.13s)

TestFunctional/serial/MinikubeKubectlCmd (0.14s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:716: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 kubectl -- --context functional-355453 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.14s)

TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:741: (dbg) Run:  out/kubectl --context functional-355453 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.17s)

TestFunctional/serial/ExtraConfig (33.67s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:757: (dbg) Run:  out/minikube-linux-arm64 start -p functional-355453 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:757: (dbg) Done: out/minikube-linux-arm64 start -p functional-355453 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (33.668925383s)
functional_test.go:761: restart took 33.669043251s for "functional-355453" cluster.
I1213 19:30:32.146337  602199 config.go:182] Loaded profile config "functional-355453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestFunctional/serial/ExtraConfig (33.67s)

TestFunctional/serial/ComponentHealth (0.11s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:810: (dbg) Run:  kubectl --context functional-355453 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:825: etcd phase: Running
functional_test.go:835: etcd status: Ready
functional_test.go:825: kube-apiserver phase: Running
functional_test.go:835: kube-apiserver status: Ready
functional_test.go:825: kube-controller-manager phase: Running
functional_test.go:835: kube-controller-manager status: Ready
functional_test.go:825: kube-scheduler phase: Running
functional_test.go:835: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.11s)

TestFunctional/serial/LogsCmd (1.84s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1236: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 logs
functional_test.go:1236: (dbg) Done: out/minikube-linux-arm64 -p functional-355453 logs: (1.837970862s)
--- PASS: TestFunctional/serial/LogsCmd (1.84s)

TestFunctional/serial/LogsFileCmd (1.85s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1250: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 logs --file /tmp/TestFunctionalserialLogsFileCmd3443848116/001/logs.txt
functional_test.go:1250: (dbg) Done: out/minikube-linux-arm64 -p functional-355453 logs --file /tmp/TestFunctionalserialLogsFileCmd3443848116/001/logs.txt: (1.847761832s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.85s)

TestFunctional/serial/InvalidService (4.37s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2321: (dbg) Run:  kubectl --context functional-355453 apply -f testdata/invalidsvc.yaml
functional_test.go:2335: (dbg) Run:  out/minikube-linux-arm64 service invalid-svc -p functional-355453
functional_test.go:2335: (dbg) Non-zero exit: out/minikube-linux-arm64 service invalid-svc -p functional-355453: exit status 115 (692.632794ms)
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.168.49.2:31175 |
	|-----------|-------------|-------------|---------------------------|
	
	
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
** /stderr **
functional_test.go:2327: (dbg) Run:  kubectl --context functional-355453 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.37s)
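
Note: a sketch of what testdata/invalidsvc.yaml plausibly contains; the assumption is a Service whose selector matches no running pod, which is exactly the condition `minikube service` rejects with SVC_UNREACHABLE (exit status 115):

kubectl --context functional-355453 apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: invalid-svc
spec:
  type: NodePort
  selector:
    app: no-such-pod   # hypothetical selector that matches nothing
  ports:
  - port: 80
EOF
minikube service invalid-svc -p functional-355453   # exits 115: SVC_UNREACHABLE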

TestFunctional/parallel/ConfigCmd (0.55s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-355453 config get cpus: exit status 14 (88.863999ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 config set cpus 2
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 config get cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 config unset cpus
functional_test.go:1199: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 config get cpus
functional_test.go:1199: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-355453 config get cpus: exit status 14 (92.419342ms)
** stderr ** 
	Error: specified key could not be found in config
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.55s)

TestFunctional/parallel/DashboardCmd (15.26s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:905: (dbg) daemon: [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-355453 --alsologtostderr -v=1]
functional_test.go:910: (dbg) stopping [out/minikube-linux-arm64 dashboard --url --port 36195 -p functional-355453 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 630342: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (15.26s)

TestFunctional/parallel/DryRun (0.55s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:974: (dbg) Run:  out/minikube-linux-arm64 start -p functional-355453 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:974: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-355453 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (207.599796ms)
-- stdout --
	* [functional-355453] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20090
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20090-596807/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-596807/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on existing profile
	
	
-- /stdout --
** stderr ** 
	I1213 19:31:13.865306  630046 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:31:13.865452  630046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:31:13.865462  630046 out.go:358] Setting ErrFile to fd 2...
	I1213 19:31:13.865468  630046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:31:13.865739  630046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-596807/.minikube/bin
	I1213 19:31:13.866142  630046 out.go:352] Setting JSON to false
	I1213 19:31:13.867153  630046 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11590,"bootTime":1734106684,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 19:31:13.867239  630046 start.go:139] virtualization:  
	I1213 19:31:13.871898  630046 out.go:177] * [functional-355453] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1213 19:31:13.874374  630046 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 19:31:13.874500  630046 notify.go:220] Checking for updates...
	I1213 19:31:13.879639  630046 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:31:13.882498  630046 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20090-596807/kubeconfig
	I1213 19:31:13.884911  630046 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-596807/.minikube
	I1213 19:31:13.887131  630046 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 19:31:13.889807  630046 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 19:31:13.893040  630046 config.go:182] Loaded profile config "functional-355453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:31:13.893686  630046 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 19:31:13.924207  630046 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1213 19:31:13.924389  630046 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:31:13.985313  630046 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-13 19:31:13.975931884 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1213 19:31:13.985431  630046 docker.go:318] overlay module found
	I1213 19:31:13.989412  630046 out.go:177] * Using the docker driver based on existing profile
	I1213 19:31:13.992355  630046 start.go:297] selected driver: docker
	I1213 19:31:13.992395  630046 start.go:901] validating driver "docker" against &{Name:functional-355453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-355453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:31:13.992541  630046 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 19:31:13.996146  630046 out.go:201] 
	W1213 19:31:13.998472  630046 out.go:270] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I1213 19:31:14.000503  630046 out.go:201] 
** /stderr **
functional_test.go:991: (dbg) Run:  out/minikube-linux-arm64 start -p functional-355453 --dry-run --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.55s)

TestFunctional/parallel/InternationalLanguage (0.23s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1020: (dbg) Run:  out/minikube-linux-arm64 start -p functional-355453 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio
functional_test.go:1020: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p functional-355453 --dry-run --memory 250MB --alsologtostderr --driver=docker  --container-runtime=crio: exit status 23 (232.455504ms)
-- stdout --
	* [functional-355453] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20090
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20090-596807/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-596807/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote docker basé sur le profil existant
	
	
-- /stdout --
** stderr ** 
	I1213 19:31:13.644931  630000 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:31:13.645106  630000 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:31:13.645115  630000 out.go:358] Setting ErrFile to fd 2...
	I1213 19:31:13.645122  630000 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:31:13.646741  630000 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-596807/.minikube/bin
	I1213 19:31:13.647196  630000 out.go:352] Setting JSON to false
	I1213 19:31:13.648154  630000 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":11590,"bootTime":1734106684,"procs":195,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 19:31:13.648244  630000 start.go:139] virtualization:  
	I1213 19:31:13.651146  630000 out.go:177] * [functional-355453] minikube v1.34.0 sur Ubuntu 20.04 (arm64)
	I1213 19:31:13.653492  630000 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 19:31:13.653609  630000 notify.go:220] Checking for updates...
	I1213 19:31:13.658699  630000 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 19:31:13.661361  630000 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20090-596807/kubeconfig
	I1213 19:31:13.663790  630000 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-596807/.minikube
	I1213 19:31:13.666382  630000 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 19:31:13.669232  630000 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 19:31:13.672686  630000 config.go:182] Loaded profile config "functional-355453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:31:13.673242  630000 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 19:31:13.704007  630000 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1213 19:31:13.704136  630000 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:31:13.788643  630000 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:2 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:32 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-13 19:31:13.779059202 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1213 19:31:13.788758  630000 docker.go:318] overlay module found
	I1213 19:31:13.791364  630000 out.go:177] * Utilisation du pilote docker basé sur le profil existant
	I1213 19:31:13.794115  630000 start.go:297] selected driver: docker
	I1213 19:31:13.794138  630000 start.go:901] validating driver "docker" against &{Name:functional-355453 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.45-1734029593-20090@sha256:7b3f6168a578563fb342f21f0c926652b91ba060931e8fbc6c6ade3ac1d26ed9 Memory:4000 CPUs:2 DiskSize:20000 Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.2 ClusterName:functional-355453 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.49.2 Port:8441 KubernetesVersion:v1.31.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I1213 19:31:13.794474  630000 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 19:31:13.798944  630000 out.go:201] 
	W1213 19:31:13.801875  630000 out.go:270] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I1213 19:31:13.804038  630000 out.go:201] 
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.23s)

TestFunctional/parallel/StatusCmd (1.08s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:854: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 status
functional_test.go:860: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:872: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.08s)

TestFunctional/parallel/ServiceCmdConnect (10.6s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1627: (dbg) Run:  kubectl --context functional-355453 create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1635: (dbg) Run:  kubectl --context functional-355453 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-65d86f57f4-mjtl6" [51c58036-9c79-4c0f-a5e6-14cb433cba62] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-connect-65d86f57f4-mjtl6" [51c58036-9c79-4c0f-a5e6-14cb433cba62] Running
functional_test.go:1640: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 10.004100942s
functional_test.go:1649: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 service hello-node-connect --url
functional_test.go:1655: found endpoint for hello-node-connect: http://192.168.49.2:31432
functional_test.go:1675: http://192.168.49.2:31432: success! body:

Hostname: hello-node-connect-65d86f57f4-mjtl6

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.49.2:8080/

Request Headers:
	accept-encoding=gzip
	host=192.168.49.2:31432
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (10.60s)
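
The flow this test automates can be reproduced by hand: create a deployment, expose it as a NodePort, ask minikube for the URL, then curl it. A sketch, assuming a hypothetical profile named demo:

  kubectl create deployment hello-node-connect --image=registry.k8s.io/echoserver-arm:1.8
  kubectl expose deployment hello-node-connect --type=NodePort --port=8080
  # Prints something like http://192.168.49.2:31432 once the NodePort is assigned
  URL=$(minikube -p demo service hello-node-connect --url)
  curl "$URL"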

TestFunctional/parallel/AddonsCmd (0.21s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1690: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 addons list
functional_test.go:1702: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.21s)

TestFunctional/parallel/PersistentVolumeClaim (25.06s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [3f970c15-720c-4ad1-bd04-4ce8f24df837] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.003729582s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-355453 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-355453 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-355453 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-355453 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [8d7207d5-6b5f-4d76-b0b1-0a9048a77c7d] Pending
helpers_test.go:344: "sp-pod" [8d7207d5-6b5f-4d76-b0b1-0a9048a77c7d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [8d7207d5-6b5f-4d76-b0b1-0a9048a77c7d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.026835508s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-355453 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-355453 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-355453 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [46739cb9-33d7-4131-a6fe-266a0b60f7d6] Pending
helpers_test.go:344: "sp-pod" [46739cb9-33d7-4131-a6fe-266a0b60f7d6] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 6.004135714s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-355453 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (25.06s)
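
The proof of persistence above is the round trip: touch a file through the first sp-pod, delete the pod, recreate it against the same PVC, and list the file again. The same check by hand, assuming the myclaim/sp-pod names used by the test:

  kubectl get pvc myclaim -o jsonpath='{.status.phase}'    # expect: Bound
  kubectl exec sp-pod -- touch /tmp/mount/foo
  kubectl delete pod sp-pod
  kubectl apply -f testdata/storage-provisioner/pod.yaml   # new pod, same claim
  kubectl exec sp-pod -- ls /tmp/mount                     # foo should survive the pod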

TestFunctional/parallel/SSHCmd (0.72s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1725: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh "echo hello"
functional_test.go:1742: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.72s)

TestFunctional/parallel/CpCmd (2.44s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh -n functional-355453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 cp functional-355453:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd1743787831/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh -n functional-355453 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh -n functional-355453 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (2.44s)
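
minikube cp copies in both directions and creates missing target directories on the node, which is what the three runs above cover. A sketch, with demo as a hypothetical profile/node name:

  minikube -p demo cp ./local.txt /home/docker/remote.txt       # host -> node
  minikube -p demo cp demo:/home/docker/remote.txt ./back.txt   # node -> host
  minikube -p demo ssh -n demo "cat /home/docker/remote.txt"    # verify on the node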

TestFunctional/parallel/FileSync (0.51s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1929: Checking for existence of /etc/test/nested/copy/602199/hosts within VM
functional_test.go:1931: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh "sudo cat /etc/test/nested/copy/602199/hosts"
functional_test.go:1936: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.51s)

TestFunctional/parallel/CertSync (2.29s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1972: Checking for existence of /etc/ssl/certs/602199.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh "sudo cat /etc/ssl/certs/602199.pem"
functional_test.go:1972: Checking for existence of /usr/share/ca-certificates/602199.pem within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh "sudo cat /usr/share/ca-certificates/602199.pem"
functional_test.go:1972: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1973: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/6021992.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh "sudo cat /etc/ssl/certs/6021992.pem"
functional_test.go:1999: Checking for existence of /usr/share/ca-certificates/6021992.pem within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh "sudo cat /usr/share/ca-certificates/6021992.pem"
functional_test.go:1999: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2000: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (2.29s)
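
The hash-named files (51391683.0, 3ec20f2e.0) are OpenSSL subject-hash entries: the trust store links each certificate under the hash of its subject so TLS clients can locate it. The expected filename can be derived directly; a sketch against the synced test cert:

  # Prints the 8-hex-digit subject hash; the corresponding trust-store entry is <hash>.0
  openssl x509 -in /usr/share/ca-certificates/602199.pem -noout -subject_hash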

TestFunctional/parallel/NodeLabels (0.16s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:219: (dbg) Run:  kubectl --context functional-355453 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.16s)
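
The go-template above walks the first node's label map and prints only the keys. The same labels are visible without templating; a sketch:

  kubectl get nodes --show-labels                              # key=value list per node
  kubectl get nodes -o jsonpath='{.items[0].metadata.labels}'  # raw label map of the first node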

TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh "sudo systemctl is-active docker"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-355453 ssh "sudo systemctl is-active docker": exit status 1 (369.351394ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
functional_test.go:2027: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh "sudo systemctl is-active containerd"
functional_test.go:2027: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-355453 ssh "sudo systemctl is-active containerd": exit status 1 (376.561501ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.75s)
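
Both non-zero exits here are the expected result: systemctl is-active returns 3 when a unit is inactive, and minikube ssh surfaces that as its own exit status 1. On a crio cluster only crio should be active; a sketch, assuming a hypothetical profile named demo:

  minikube -p demo ssh "sudo systemctl is-active crio"     # prints "active", exit 0
  minikube -p demo ssh "sudo systemctl is-active docker"   # prints "inactive", remote exit 3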

TestFunctional/parallel/License (0.36s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2288: (dbg) Run:  out/minikube-linux-arm64 license
--- PASS: TestFunctional/parallel/License (0.36s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.71s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-355453 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-linux-arm64 -p functional-355453 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-355453 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 627740: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-linux-arm64 -p functional-355453 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.71s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-linux-arm64 -p functional-355453 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.00s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.41s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-355453 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [84bb6d97-58ef-4b5e-9bfb-e9da934204ae] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [84bb6d97-58ef-4b5e-9bfb-e9da934204ae] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 10.004067267s
I1213 19:30:52.702033  602199 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (10.41s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-355453 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.11s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.103.11.142 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)
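
The serial tunnel flow above is: keep minikube tunnel running, create a LoadBalancer service, wait for .status.loadBalancer.ingress[0].ip to be populated, then hit that IP directly. Condensed by hand, with demo as a hypothetical profile and <ingress-ip> standing in for the assigned address:

  minikube -p demo tunnel &    # must stay running; may ask for sudo to create routes
  kubectl expose pod nginx-svc --type=LoadBalancer --port=80
  kubectl get svc nginx-svc -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
  curl http://<ingress-ip>/    # reachable only while the tunnel process lives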

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-linux-arm64 -p functional-355453 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.11s)

TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1437: (dbg) Run:  kubectl --context functional-355453 create deployment hello-node --image=registry.k8s.io/echoserver-arm:1.8
functional_test.go:1445: (dbg) Run:  kubectl --context functional-355453 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-64b4f8f9ff-df28c" [0c43a393-9fd6-400d-8681-eb9322a66bcf] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver-arm]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver-arm])
helpers_test.go:344: "hello-node-64b4f8f9ff-df28c" [0c43a393-9fd6-400d-8681-eb9322a66bcf] Running
functional_test.go:1450: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 6.009430538s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (6.22s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1270: (dbg) Run:  out/minikube-linux-arm64 profile lis
functional_test.go:1275: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.47s)

TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1310: (dbg) Run:  out/minikube-linux-arm64 profile list
functional_test.go:1315: Took "364.740798ms" to run "out/minikube-linux-arm64 profile list"
functional_test.go:1324: (dbg) Run:  out/minikube-linux-arm64 profile list -l
functional_test.go:1329: Took "67.839615ms" to run "out/minikube-linux-arm64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.43s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1361: (dbg) Run:  out/minikube-linux-arm64 profile list -o json
functional_test.go:1366: Took "368.863507ms" to run "out/minikube-linux-arm64 profile list -o json"
functional_test.go:1374: (dbg) Run:  out/minikube-linux-arm64 profile list -o json --light
functional_test.go:1379: Took "65.468976ms" to run "out/minikube-linux-arm64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.43s)

TestFunctional/parallel/MountCmd/any-port (9.31s)

=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-355453 /tmp/TestFunctionalparallelMountCmdany-port3204627298/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1734118269145818886" to /tmp/TestFunctionalparallelMountCmdany-port3204627298/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1734118269145818886" to /tmp/TestFunctionalparallelMountCmdany-port3204627298/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1734118269145818886" to /tmp/TestFunctionalparallelMountCmdany-port3204627298/001/test-1734118269145818886
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-355453 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (375.69966ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 19:31:09.521843  602199 retry.go:31] will retry after 261.589749ms: exit status 1
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Dec 13 19:31 created-by-test
-rw-r--r-- 1 docker docker 24 Dec 13 19:31 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Dec 13 19:31 test-1734118269145818886
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh cat /mount-9p/test-1734118269145818886
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-355453 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [a88753b3-547a-406a-a6b6-7fea0b129cb0] Pending
helpers_test.go:344: "busybox-mount" [a88753b3-547a-406a-a6b6-7fea0b129cb0] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [a88753b3-547a-406a-a6b6-7fea0b129cb0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [a88753b3-547a-406a-a6b6-7fea0b129cb0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 6.004143114s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-355453 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-355453 /tmp/TestFunctionalparallelMountCmdany-port3204627298/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (9.31s)
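
minikube mount 9p-exports a host directory into the guest and blocks until killed; pods then reach the files through a hostPath volume, which is what busybox-mount verifies. A sketch, assuming a hypothetical profile named demo:

  minikube -p demo mount /tmp/shared:/mount-9p &    # normally left running in another terminal
  minikube -p demo ssh "findmnt -T /mount-9p"       # confirm the 9p mount is present
  minikube -p demo ssh "ls -la /mount-9p"
  minikube mount -p demo --kill=true                # tear down mount processes (see VerifyCleanup below)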

TestFunctional/parallel/ServiceCmd/List (0.65s)

=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1459: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.65s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1489: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 service list -o json
functional_test.go:1494: Took "624.986166ms" to run "out/minikube-linux-arm64 -p functional-355453 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.63s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1509: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 service --namespace=default --https --url hello-node
functional_test.go:1522: found endpoint: https://192.168.49.2:30813
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.43s)

TestFunctional/parallel/ServiceCmd/Format (0.41s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1540: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.41s)

TestFunctional/parallel/ServiceCmd/URL (0.43s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1559: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 service hello-node --url
functional_test.go:1565: found endpoint for hello-node: http://192.168.49.2:30813
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.43s)

TestFunctional/parallel/MountCmd/specific-port (2.24s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-355453 /tmp/TestFunctionalparallelMountCmdspecific-port155912854/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-355453 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (399.842782ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 19:31:18.855026  602199 retry.go:31] will retry after 273.771523ms: exit status 1
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-355453 /tmp/TestFunctionalparallelMountCmdspecific-port155912854/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-355453 ssh "sudo umount -f /mount-9p": exit status 1 (451.798229ms)

-- stdout --
	umount: /mount-9p: not mounted.

-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-arm64 -p functional-355453 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-355453 /tmp/TestFunctionalparallelMountCmdspecific-port155912854/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.24s)

TestFunctional/parallel/MountCmd/VerifyCleanup (2.66s)

=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-355453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2017923580/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-355453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2017923580/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-arm64 mount -p functional-355453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2017923580/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-355453 ssh "findmnt -T" /mount1: exit status 1 (1.012645352s)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
I1213 19:31:21.708866  602199 retry.go:31] will retry after 294.559054ms: exit status 1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-arm64 mount -p functional-355453 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-355453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2017923580/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-355453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2017923580/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-arm64 mount -p functional-355453 /tmp/TestFunctionalparallelMountCmdVerifyCleanup2017923580/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (2.66s)

TestFunctional/parallel/Version/short (0.06s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2256: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

TestFunctional/parallel/Version/components (1.38s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2270: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 version -o=json --components
functional_test.go:2270: (dbg) Done: out/minikube-linux-arm64 -p functional-355453 version -o=json --components: (1.381459792s)
--- PASS: TestFunctional/parallel/Version/components (1.38s)
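
--components is the slow variant because it also queries the versions of the tools bundled inside the node (kubelet, crio, crictl, and so on), not just the minikube binary. A sketch, assuming a hypothetical profile named demo:

  minikube -p demo version --short               # minikube version only
  minikube -p demo version -o json --components  # per-component version map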

TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 image ls --format short --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-355453 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.31.2
registry.k8s.io/kube-proxy:v1.31.2
registry.k8s.io/kube-controller-manager:v1.31.2
registry.k8s.io/kube-apiserver:v1.31.2
registry.k8s.io/etcd:3.5.15-0
registry.k8s.io/echoserver-arm:1.8
registry.k8s.io/coredns/coredns:v1.11.3
localhost/minikube-local-cache-test:functional-355453
localhost/kicbase/echo-server:functional-355453
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/kindest/kindnetd:v20241108-5c6d2daf
docker.io/kindest/kindnetd:v20241007-36f62932
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-355453 image ls --format short --alsologtostderr:
I1213 19:31:33.061620  632872 out.go:345] Setting OutFile to fd 1 ...
I1213 19:31:33.061767  632872 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:31:33.061779  632872 out.go:358] Setting ErrFile to fd 2...
I1213 19:31:33.061784  632872 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:31:33.062121  632872 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-596807/.minikube/bin
I1213 19:31:33.063155  632872 config.go:182] Loaded profile config "functional-355453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:31:33.063300  632872 config.go:182] Loaded profile config "functional-355453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:31:33.064131  632872 cli_runner.go:164] Run: docker container inspect functional-355453 --format={{.State.Status}}
I1213 19:31:33.085496  632872 ssh_runner.go:195] Run: systemctl --version
I1213 19:31:33.085563  632872 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-355453
I1213 19:31:33.106186  632872 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33522 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/functional-355453/id_rsa Username:docker}
I1213 19:31:33.207105  632872 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.26s)
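
All four image ls formats are views over the same sudo crictl images --output json call visible in the stderr trace above. A sketch of the variants, assuming a hypothetical profile named demo:

  minikube -p demo image ls --format short   # one image:tag per line, as shown above
  minikube -p demo image ls --format table   # image / tag / id / size columns
  minikube -p demo image ls --format json    # machine-readable; e.g. pipe into jq '.[].repoTags'
  minikube -p demo image ls --format yaml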

TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 image ls --format table --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-355453 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | alpine             | dba92e6b64886 | 58.3MB |
| registry.k8s.io/etcd                    | 3.5.15-0           | 27e3830e14027 | 140MB  |
| registry.k8s.io/kube-apiserver          | v1.31.2            | f9c26480f1e72 | 92.6MB |
| registry.k8s.io/kube-controller-manager | v1.31.2            | 9404aea098d9e | 87MB   |
| registry.k8s.io/pause                   | 3.10               | afb61768ce381 | 520kB  |
| registry.k8s.io/pause                   | 3.1                | 8057e0500773a | 529kB  |
| docker.io/kindest/kindnetd              | v20241007-36f62932 | 0bcd66b03df5f | 98.3MB |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 1611cd07b61d5 | 3.77MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | ba04bb24b9575 | 29MB   |
| localhost/minikube-local-cache-test     | functional-355453  | dbf138ac3dfd7 | 3.33kB |
| registry.k8s.io/kube-scheduler          | v1.31.2            | d6b061e73ae45 | 67MB   |
| docker.io/library/nginx                 | latest             | bdf62fd3a32f1 | 201MB  |
| registry.k8s.io/coredns/coredns         | v1.11.3            | 2f6c962e7b831 | 61.6MB |
| registry.k8s.io/pause                   | 3.3                | 3d18732f8686c | 487kB  |
| docker.io/kindest/kindnetd              | v20241108-5c6d2daf | 2be0bcf609c65 | 98.3MB |
| localhost/kicbase/echo-server           | functional-355453  | ce2d2cda2d858 | 4.79MB |
| registry.k8s.io/echoserver-arm          | 1.8                | 72565bf5bbedf | 87.5MB |
| registry.k8s.io/kube-proxy              | v1.31.2            | 021d242013305 | 96MB   |
| registry.k8s.io/pause                   | latest             | 8cb2091f603e7 | 246kB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-355453 image ls --format table --alsologtostderr:
I1213 19:31:34.371026  633137 out.go:345] Setting OutFile to fd 1 ...
I1213 19:31:34.371192  633137 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:31:34.371198  633137 out.go:358] Setting ErrFile to fd 2...
I1213 19:31:34.371203  633137 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:31:34.371569  633137 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-596807/.minikube/bin
I1213 19:31:34.372600  633137 config.go:182] Loaded profile config "functional-355453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:31:34.372745  633137 config.go:182] Loaded profile config "functional-355453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:31:34.373472  633137 cli_runner.go:164] Run: docker container inspect functional-355453 --format={{.State.Status}}
I1213 19:31:34.392144  633137 ssh_runner.go:195] Run: systemctl --version
I1213 19:31:34.392206  633137 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-355453
I1213 19:31:34.409498  633137 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33522 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/functional-355453/id_rsa Username:docker}
I1213 19:31:34.506956  633137 ssh_runner.go:195] Run: sudo crictl images --output json
E1213 19:31:35.832245  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.24s)

TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 image ls --format json --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-355453 image ls --format json --alsologtostderr:
[{"id":"a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c","docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a"],"repoTags":[],"size":"42263767"},{"id":"dba92e6b6488643fe4f2e872e6e4f6c30948171890d0f2cb96f28c435352397f","repoDigests":["docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4","docker.io/library/nginx@sha256:eff2df9ac0ef6c949886d040dc2037ee6576d76161249261982fb70458ae8c26"],"repoTags":["docker.io/library/nginx:alpine"],"size":"58293755"},{"id":"ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"29037500"},{"id":"72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb","repoDigests":["registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5"],"repoTags":["registry.k8s.io/echoserver-arm:1.8"],"size":"87536549"},{"id":"9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba","repoDigests":["registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752","registry.k8s.io/kube-controller-manager@sha256:b8d51076af39954cadc718ae40bd8a736ae5ad4e0654465ae91886cad3a9b602"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.31.2"],"size":"86996294"},{"id":"bdf62fd3a32f1209270ede068b6e08450dfe125c79b1a8ba8f5685090023bf7f","repoDigests":["docker.io/library/nginx@sha256:6d3e464bc399ce5b0cd6a165162deb5926803c1c0ae8a1983ba0a1982b97a7a2","docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be"],"repoTags":["docker.io/library/nginx:latest"],"size":"201166247"},{"id":"1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"3774172"},{"id":"ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17","repoDigests":["localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a"],"repoTags":["localhost/kicbase/echo-server:functional-355453"],"size":"4788229"},{"id":"27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da","repoDigests":["registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a","registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da"],"repoTags":["registry.k8s.io/etcd:3.5.15-0"],"size":"139912446"},{"id":"afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8","repoDigests":["registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a"],"repoTags":["registry.k8s.io/pause:3.10"],"size":"519877"},{"id":"3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300","repoDigests":["registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"487479"},{"id":"0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa","repoDigests":["docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387","docker.io/kindest/kindnetd@sha256:b61c0e5ba940299ee811efe946ee83e509799ea7e0651e1b782e83a665b29bae"],"repoTags":["docker.io/kindest/kindnetd:v20241007-36f62932"],"size":"98291250"},{"id":"2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903","repoDigests":["docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e","docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108"],"repoTags":["docker.io/kindest/kindnetd:v20241108-5c6d2daf"],"size":"98274354"},{"id":"2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4","repoDigests":["registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e"],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.3"],"size":"61647114"},{"id":"021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba","repoDigests":["registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe","registry.k8s.io/kube-proxy@sha256:adabb2ce69fab82e04b441902489c8dd06f47122f00bc1062189f3cf477c795a"],"repoTags":["registry.k8s.io/kube-proxy:v1.31.2"],"size":"95952789"},{"id":"8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5","repoDigests":["registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"528622"},{"id":"8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a","repoDigests":["registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca"],"repoTags":["registry.k8s.io/pause:latest"],"size":"246070"},{"id":"20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf"],"repoTags":[],"size":"247562353"},{"id":"dbf138ac3dfd7426265c89e72b8d5e8d70f4f0fa2b740c37b5c16ab8e323e083","repoDigests":["localhost/minikube-local-cache-test@sha256:6928da7bfbce5bd043cf3e891da7b21b2ed5959e72a65eeb7f19fb9c9312a77f"],"repoTags":["localhost/minikube-local-cache-test:functional-355453"],"size":"3330"},{"id":"f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270","repoDigests":["registry.k8s.io/kube-apiserver@sha256:8e7caee5c8075d84ee5b93472bedf9cf21364da1d72d60d3de15dfa0d172ff63","registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0"],"repoTags":["registry.k8s.io/kube-apiserver:v1.31.2"],"size":"92632544"},{"id":"d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a","repoDigests":["registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282","registry.k8s.io/kube-scheduler@sha256:38def311c8c2668b4b3820de83cd518e0d1c32cda10e661163f957a87f92ca34"],"repoTags":["registry.k8s.io/kube-scheduler:v1.31.2"],"size":"67007814"}]
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-355453 image ls --format json --alsologtostderr:
I1213 19:31:34.126380  633107 out.go:345] Setting OutFile to fd 1 ...
I1213 19:31:34.126605  633107 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:31:34.126634  633107 out.go:358] Setting ErrFile to fd 2...
I1213 19:31:34.126652  633107 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:31:34.126970  633107 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-596807/.minikube/bin
I1213 19:31:34.127712  633107 config.go:182] Loaded profile config "functional-355453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:31:34.127901  633107 config.go:182] Loaded profile config "functional-355453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:31:34.128436  633107 cli_runner.go:164] Run: docker container inspect functional-355453 --format={{.State.Status}}
I1213 19:31:34.150150  633107 ssh_runner.go:195] Run: systemctl --version
I1213 19:31:34.150200  633107 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-355453
I1213 19:31:34.168141  633107 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33522 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/functional-355453/id_rsa Username:docker}
I1213 19:31:34.266932  633107 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.25s)

TestFunctional/parallel/ImageCommands/ImageListYaml (0.3s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 image ls --format yaml --alsologtostderr
functional_test.go:266: (dbg) Stdout: out/minikube-linux-arm64 -p functional-355453 image ls --format yaml --alsologtostderr:
- id: 3d18732f8686cc3c878055d99a05fa80289502fa496b36b6a0fe0f77206a7300
repoDigests:
- registry.k8s.io/pause@sha256:e59730b14890252c14f85976e22ab1c47ec28b111ffed407f34bca1b44447476
repoTags:
- registry.k8s.io/pause:3.3
size: "487479"
- id: ba04bb24b95753201135cbc420b233c1b0b9fa2e1fd21d28319c348c33fbcde6
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:0ba370588274b88531ab311a5d2e645d240a853555c1e58fd1dd428fc333c9d2
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "29037500"
- id: 9404aea098d9e80cb648d86c07d56130a1fe875ed7c2526251c2ae68a9bf07ba
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:a33795e8b0ff9923d1539331975c4e76e2a74090f9f82eca775e2390e4f20752
- registry.k8s.io/kube-controller-manager@sha256:b8d51076af39954cadc718ae40bd8a736ae5ad4e0654465ae91886cad3a9b602
repoTags:
- registry.k8s.io/kube-controller-manager:v1.31.2
size: "86996294"
- id: 021d2420133054f8835987db659750ff639ab6863776460264dd8025c06644ba
repoDigests:
- registry.k8s.io/kube-proxy@sha256:62128d752eb4a9162074697aba46adea4abb8aab2a53c992f20881365b61a4fe
- registry.k8s.io/kube-proxy@sha256:adabb2ce69fab82e04b441902489c8dd06f47122f00bc1062189f3cf477c795a
repoTags:
- registry.k8s.io/kube-proxy:v1.31.2
size: "95952789"
- id: afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8
repoDigests:
- registry.k8s.io/pause@sha256:e50b7059b633caf3c1449b8da680d11845cda4506b513ee7a2de00725f0a34a7
- registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a
repoTags:
- registry.k8s.io/pause:3.10
size: "519877"
- id: f9c26480f1e722a7d05d7f1bb339180b19f941b23bcc928208e362df04a61270
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:8e7caee5c8075d84ee5b93472bedf9cf21364da1d72d60d3de15dfa0d172ff63
- registry.k8s.io/kube-apiserver@sha256:9d12daaedff9677744993f247bfbe4950f3da8cfd38179b3c59ec66dc81dfbe0
repoTags:
- registry.k8s.io/kube-apiserver:v1.31.2
size: "92632544"
- id: d6b061e73ae454743cbfe0e3479aa23e4ed65c61d38b4408a1e7f3d3859dda8a
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:0f78992e985d0dbe65f3e7598943d34b725cd61a21ba92edf5ac29f0f2b61282
- registry.k8s.io/kube-scheduler@sha256:38def311c8c2668b4b3820de83cd518e0d1c32cda10e661163f957a87f92ca34
repoTags:
- registry.k8s.io/kube-scheduler:v1.31.2
size: "67007814"
- id: 8057e0500773a37cde2cff041eb13ebd68c748419a2fbfd1dfb5bf38696cc8e5
repoDigests:
- registry.k8s.io/pause@sha256:b0602c9f938379133ff8017007894b48c1112681c9468f82a1e4cbf8a4498b67
repoTags:
- registry.k8s.io/pause:3.1
size: "528622"
- id: bdf62fd3a32f1209270ede068b6e08450dfe125c79b1a8ba8f5685090023bf7f
repoDigests:
- docker.io/library/nginx@sha256:6d3e464bc399ce5b0cd6a165162deb5926803c1c0ae8a1983ba0a1982b97a7a2
- docker.io/library/nginx@sha256:fb197595ebe76b9c0c14ab68159fd3c08bd067ec62300583543f0ebda353b5be
repoTags:
- docker.io/library/nginx:latest
size: "201166247"
- id: dbf138ac3dfd7426265c89e72b8d5e8d70f4f0fa2b740c37b5c16ab8e323e083
repoDigests:
- localhost/minikube-local-cache-test@sha256:6928da7bfbce5bd043cf3e891da7b21b2ed5959e72a65eeb7f19fb9c9312a77f
repoTags:
- localhost/minikube-local-cache-test:functional-355453
size: "3330"
- id: 72565bf5bbedfb62e9d21afa2b1221b2c7a5e05b746dae33430bc550d3f87beb
repoDigests:
- registry.k8s.io/echoserver-arm@sha256:b33d4cdf6ed097f4e9b77b135d83a596ab73c6268b0342648818eb85f5edfdb5
repoTags:
- registry.k8s.io/echoserver-arm:1.8
size: "87536549"
- id: 27e3830e1402783674d8b594038967deea9d51f0d91b34c93c8f39d2f68af7da
repoDigests:
- registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a
- registry.k8s.io/etcd@sha256:e3ee3ca2dbaf511385000dbd54123629c71b6cfaabd469e658d76a116b7f43da
repoTags:
- registry.k8s.io/etcd:3.5.15-0
size: "139912446"
- id: 20b332c9a70d8516d849d1ac23eff5800cbb2f263d379f0ec11ee908db6b25a8
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:5c52c60663b473628bd98e4ffee7a747ef1f88d8c7bcee957b089fb3f61bdedf
repoTags: []
size: "247562353"
- id: dba92e6b6488643fe4f2e872e6e4f6c30948171890d0f2cb96f28c435352397f
repoDigests:
- docker.io/library/nginx@sha256:41523187cf7d7a2f2677a80609d9caa14388bf5c1fbca9c410ba3de602aaaab4
- docker.io/library/nginx@sha256:eff2df9ac0ef6c949886d040dc2037ee6576d76161249261982fb70458ae8c26
repoTags:
- docker.io/library/nginx:alpine
size: "58293755"
- id: 2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:31440a2bef59e2f1ffb600113b557103740ff851e27b0aef5b849f6e3ab994a6
- registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61647114"
- id: ce2d2cda2d858fdaea84129deb86d18e5dbf1c548f230b79fdca74cc91729d17
repoDigests:
- localhost/kicbase/echo-server@sha256:49260110d6ce1914d3de292ed370ee11a2e34ab577b97e6011d795cb13534d4a
repoTags:
- localhost/kicbase/echo-server:functional-355453
size: "4788229"
- id: 8cb2091f603e75187e2f6226c5901d12e00b1d1f778c6471ae4578e8a1c4724a
repoDigests:
- registry.k8s.io/pause@sha256:f5e31d44aa14d5669e030380b656463a7e45934c03994e72e3dbf83d4a645cca
repoTags:
- registry.k8s.io/pause:latest
size: "246070"
- id: 0bcd66b03df5f1498fba5b90226939f5993cfba4c8379438bd8e89f3b4a70baa
repoDigests:
- docker.io/kindest/kindnetd@sha256:a454aa48d8e10631411378503103b251e3f52856d8be2535efb73a92fa2c0387
- docker.io/kindest/kindnetd@sha256:b61c0e5ba940299ee811efe946ee83e509799ea7e0651e1b782e83a665b29bae
repoTags:
- docker.io/kindest/kindnetd:v20241007-36f62932
size: "98291250"
- id: 2be0bcf609c6573ee83e676c747f31bda661ab2d4e039c51839e38fd258d2903
repoDigests:
- docker.io/kindest/kindnetd@sha256:de216f6245e142905c8022d424959a65f798fcd26f5b7492b9c0b0391d735c3e
- docker.io/kindest/kindnetd@sha256:e35e1050b69dcd16eb021c3bf915bdd9a591d4274108ac374bd941043673c108
repoTags:
- docker.io/kindest/kindnetd:v20241108-5c6d2daf
size: "98274354"
- id: a422e0e982356f6c1cf0e5bb7b733363caae3992a07c99951fbcc73e58ed656a
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
- docker.io/kubernetesui/metrics-scraper@sha256:853c43f3cced687cb211708aa0024304a5adb33ec45ebf5915d318358822e09a
repoTags: []
size: "42263767"
- id: 1611cd07b61d57dbbfebe6db242513fd51e1c02d20ba08af17a45837d86a8a8c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:580b0aa58b210f512f818b7b7ef4f63c803f7a8cd6baf571b1462b79f7b7719e
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "3774172"

                                                
                                                
functional_test.go:269: (dbg) Stderr: out/minikube-linux-arm64 -p functional-355453 image ls --format yaml --alsologtostderr:
I1213 19:31:33.843484  633038 out.go:345] Setting OutFile to fd 1 ...
I1213 19:31:33.843662  633038 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:31:33.843668  633038 out.go:358] Setting ErrFile to fd 2...
I1213 19:31:33.843674  633038 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:31:33.843908  633038 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-596807/.minikube/bin
I1213 19:31:33.844587  633038 config.go:182] Loaded profile config "functional-355453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:31:33.844697  633038 config.go:182] Loaded profile config "functional-355453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:31:33.845185  633038 cli_runner.go:164] Run: docker container inspect functional-355453 --format={{.State.Status}}
I1213 19:31:33.868283  633038 ssh_runner.go:195] Run: systemctl --version
I1213 19:31:33.868333  633038 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-355453
I1213 19:31:33.900270  633038 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33522 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/functional-355453/id_rsa Username:docker}
I1213 19:31:34.015298  633038 ssh_runner.go:195] Run: sudo crictl images --output json
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.30s)
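Note: as the stderr above shows, `image ls --format yaml` is a re-rendering of `sudo crictl images --output json` run on the node over SSH. A minimal standalone sketch of that decoding step, assuming the JSON field names seen in crictl's output (treat the exact schema as an assumption, not minikube's code):

    // Decode `sudo crictl images --output json` into the id/repoTags/
    // repoDigests/size fields that `image ls --format yaml` prints.
    package main

    import (
        "encoding/json"
        "fmt"
        "os/exec"
    )

    type criImage struct {
        ID          string   `json:"id"`
        RepoTags    []string `json:"repoTags"`
        RepoDigests []string `json:"repoDigests"`
        Size        string   `json:"size"`
    }

    func main() {
        out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
        if err != nil {
            panic(err)
        }
        var resp struct {
            Images []criImage `json:"images"`
        }
        if err := json.Unmarshal(out, &resp); err != nil {
            panic(err)
        }
        for _, img := range resp.Images {
            fmt.Printf("- id: %s\n  repoTags: %v\n  size: %q\n", img.ID, img.RepoTags, img.Size)
        }
    }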

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (3.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:308: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 ssh pgrep buildkitd
functional_test.go:308: (dbg) Non-zero exit: out/minikube-linux-arm64 -p functional-355453 ssh pgrep buildkitd: exit status 1 (326.072696ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:315: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 image build -t localhost/my-image:functional-355453 testdata/build --alsologtostderr
functional_test.go:315: (dbg) Done: out/minikube-linux-arm64 -p functional-355453 image build -t localhost/my-image:functional-355453 testdata/build --alsologtostderr: (3.07670684s)
functional_test.go:320: (dbg) Stdout: out/minikube-linux-arm64 -p functional-355453 image build -t localhost/my-image:functional-355453 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> 539cf557e35
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-355453
--> a0a6f5b90c6
Successfully tagged localhost/my-image:functional-355453
a0a6f5b90c60c2f90b24f44457b0e55744ff8f3d9fd6aa11ee4031f5d86f3dfd
functional_test.go:323: (dbg) Stderr: out/minikube-linux-arm64 -p functional-355453 image build -t localhost/my-image:functional-355453 testdata/build --alsologtostderr:
I1213 19:31:33.646964  633009 out.go:345] Setting OutFile to fd 1 ...
I1213 19:31:33.647687  633009 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:31:33.647711  633009 out.go:358] Setting ErrFile to fd 2...
I1213 19:31:33.647719  633009 out.go:392] TERM=,COLORTERM=, which probably does not support color
I1213 19:31:33.648175  633009 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-596807/.minikube/bin
I1213 19:31:33.649838  633009 config.go:182] Loaded profile config "functional-355453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:31:33.650591  633009 config.go:182] Loaded profile config "functional-355453": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
I1213 19:31:33.651176  633009 cli_runner.go:164] Run: docker container inspect functional-355453 --format={{.State.Status}}
I1213 19:31:33.670022  633009 ssh_runner.go:195] Run: systemctl --version
I1213 19:31:33.670081  633009 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" functional-355453
I1213 19:31:33.691262  633009 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33522 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/functional-355453/id_rsa Username:docker}
I1213 19:31:33.791340  633009 build_images.go:161] Building image from path: /tmp/build.1564356978.tar
I1213 19:31:33.791409  633009 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I1213 19:31:33.801456  633009 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1564356978.tar
I1213 19:31:33.805440  633009 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1564356978.tar: stat -c "%s %y" /var/lib/minikube/build/build.1564356978.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1564356978.tar': No such file or directory
I1213 19:31:33.805483  633009 ssh_runner.go:362] scp /tmp/build.1564356978.tar --> /var/lib/minikube/build/build.1564356978.tar (3072 bytes)
I1213 19:31:33.831686  633009 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1564356978
I1213 19:31:33.841181  633009 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1564356978 -xf /var/lib/minikube/build/build.1564356978.tar
I1213 19:31:33.851903  633009 crio.go:315] Building image: /var/lib/minikube/build/build.1564356978
I1213 19:31:33.852005  633009 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-355453 /var/lib/minikube/build/build.1564356978 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying blob sha256:a01966dde7f8d5ba10b6d87e776c7c8fb5a5f6bfa678874bd28b33b1fc6dba34
Copying config sha256:71a676dd070f4b701c3272e566d84951362f1326ea07d5bbad119d1c4f6b3d02
Writing manifest to image destination
Storing signatures
I1213 19:31:36.632182  633009 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-355453 /var/lib/minikube/build/build.1564356978 --cgroup-manager=cgroupfs: (2.780148658s)
I1213 19:31:36.632267  633009 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1564356978
I1213 19:31:36.642444  633009 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1564356978.tar
I1213 19:31:36.651709  633009 build_images.go:217] Built localhost/my-image:functional-355453 from /tmp/build.1564356978.tar
I1213 19:31:36.651742  633009 build_images.go:133] succeeded building to: functional-355453
I1213 19:31:36.651747  633009 build_images.go:134] failed building to: 
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (3.67s)
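Note: the build stderr traces the full path on a crio node: the local build context is tarred, copied into /var/lib/minikube/build, extracted, built with `sudo podman build ... --cgroup-manager=cgroupfs`, and the scratch files are then removed. A rough local re-enactment of that sequence (illustrative paths, run directly rather than through minikube's SSH runner):

    // Sketch of the ImageBuild flow seen in the log above; the tar and
    // extraction paths are placeholders (minikube uses a random name such
    // as build.1564356978.tar).
    package main

    import (
        "fmt"
        "os/exec"
    )

    func run(name string, args ...string) {
        out, err := exec.Command(name, args...).CombinedOutput()
        fmt.Printf("$ %s %v\n%s", name, args, out)
        if err != nil {
            panic(err)
        }
    }

    func main() {
        tarball := "/tmp/build.ctx.tar"          // placeholder for the random tar name
        dir := "/var/lib/minikube/build/ctx"     // placeholder extraction directory
        run("tar", "-cf", tarball, "-C", "testdata/build", ".") // pack the build context
        run("sudo", "mkdir", "-p", dir)
        run("sudo", "tar", "-C", dir, "-xf", tarball)           // unpack on the node
        run("sudo", "podman", "build", "-t",
            "localhost/my-image:functional-355453", dir,
            "--cgroup-manager=cgroupfs")                        // crio nodes build via podman
        run("sudo", "rm", "-rf", dir)                           // clean up scratch files
        run("sudo", "rm", "-f", tarball)
    }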

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (0.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:342: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:347: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-355453
--- PASS: TestFunctional/parallel/ImageCommands/Setup (0.75s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:355: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 image load --daemon kicbase/echo-server:functional-355453 --alsologtostderr
E1213 19:31:25.578213  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:31:25.584667  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:31:25.596119  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:31:25.617582  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:31:25.659238  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:31:25.741183  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:31:25.903130  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:31:26.224805  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:355: (dbg) Done: out/minikube-linux-arm64 -p functional-355453 image load --daemon kicbase/echo-server:functional-355453 --alsologtostderr: (1.135307348s)
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.39s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 image load --daemon kicbase/echo-server:functional-355453 --alsologtostderr
E1213 19:31:26.866828  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.99s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:235: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:240: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-355453
functional_test.go:245: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 image load --daemon kicbase/echo-server:functional-355453 --alsologtostderr
E1213 19:31:28.148890  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.69s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:380: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 image save kicbase/echo-server:functional-355453 /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
2024/12/13 19:31:29 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.69s)
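Note: `image save` writes a plain tar archive to the given path. A quick, illustrative way to sanity-check the artifact from Go is to list its entries with archive/tar (the path below is the one this test used):

    // List the entries of the tarball written by `minikube image save`.
    package main

    import (
        "archive/tar"
        "fmt"
        "io"
        "os"
    )

    func main() {
        f, err := os.Open("/home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar")
        if err != nil {
            panic(err)
        }
        defer f.Close()
        tr := tar.NewReader(f)
        for {
            hdr, err := tr.Next()
            if err == io.EOF {
                break
            }
            if err != nil {
                panic(err)
            }
            fmt.Println(hdr.Name) // a manifest plus layer blobs for an image archive
        }
    }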

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.24s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.16s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2119: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.95s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:392: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 image rm kicbase/echo-server:functional-355453 --alsologtostderr
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.95s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:409: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 image load /home/jenkins/workspace/Docker_Linux_crio_arm64/echo-server-save.tar --alsologtostderr
E1213 19:31:30.710922  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
functional_test.go:451: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.04s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.7s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:419: (dbg) Run:  docker rmi kicbase/echo-server:functional-355453
functional_test.go:424: (dbg) Run:  out/minikube-linux-arm64 -p functional-355453 image save --daemon kicbase/echo-server:functional-355453 --alsologtostderr
functional_test.go:432: (dbg) Run:  docker image inspect localhost/kicbase/echo-server:functional-355453
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.70s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:190: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-355453
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:198: (dbg) Run:  docker rmi -f localhost/my-image:functional-355453
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:206: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-355453
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (180.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-arm64 start -p ha-989864 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1213 19:31:46.073553  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:32:06.555655  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:32:47.517744  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:34:09.439272  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-linux-arm64 start -p ha-989864 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (2m59.344070068s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (180.19s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (9.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-989864 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-989864 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-arm64 kubectl -p ha-989864 -- rollout status deployment/busybox: (6.682148919s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-989864 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-989864 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-989864 -- exec busybox-7dff88458-6284d -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-989864 -- exec busybox-7dff88458-7fwsp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-989864 -- exec busybox-7dff88458-hd5rb -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-989864 -- exec busybox-7dff88458-6284d -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-989864 -- exec busybox-7dff88458-7fwsp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-989864 -- exec busybox-7dff88458-hd5rb -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-989864 -- exec busybox-7dff88458-6284d -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-989864 -- exec busybox-7dff88458-7fwsp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-989864 -- exec busybox-7dff88458-hd5rb -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (9.84s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-989864 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-989864 -- exec busybox-7dff88458-6284d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-989864 -- exec busybox-7dff88458-6284d -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-989864 -- exec busybox-7dff88458-7fwsp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-989864 -- exec busybox-7dff88458-7fwsp -- sh -c "ping -c 1 192.168.49.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-989864 -- exec busybox-7dff88458-hd5rb -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-arm64 kubectl -p ha-989864 -- exec busybox-7dff88458-hd5rb -- sh -c "ping -c 1 192.168.49.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.67s)
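Note: the shell pipeline above extracts the resolved address of host.minikube.internal from busybox nslookup output (the NR==5 line and -f3 field offsets depend on busybox's exact output format) and then pings that gateway IP. The same lookup done directly in Go, for clarity (illustrative; it would have to run inside a pod to resolve the cluster-internal name):

    // Resolve host.minikube.internal without the nslookup/awk/cut pipeline.
    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        addrs, err := net.LookupHost("host.minikube.internal")
        if err != nil {
            panic(err)
        }
        fmt.Println(addrs) // expected to include the gateway, e.g. 192.168.49.1
    }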

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (36.22s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-989864 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-linux-arm64 node add -p ha-989864 -v=7 --alsologtostderr: (35.174508821s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Done: out/minikube-linux-arm64 -p ha-989864 status -v=7 --alsologtostderr: (1.041672196s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (36.22s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-989864 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.12s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.074840888s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (1.08s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (20.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Done: out/minikube-linux-arm64 -p ha-989864 status --output json -v=7 --alsologtostderr: (1.034637599s)
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 cp testdata/cp-test.txt ha-989864:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 cp ha-989864:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1007018089/001/cp-test_ha-989864.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 cp ha-989864:/home/docker/cp-test.txt ha-989864-m02:/home/docker/cp-test_ha-989864_ha-989864-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864-m02 "sudo cat /home/docker/cp-test_ha-989864_ha-989864-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 cp ha-989864:/home/docker/cp-test.txt ha-989864-m03:/home/docker/cp-test_ha-989864_ha-989864-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864-m03 "sudo cat /home/docker/cp-test_ha-989864_ha-989864-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 cp ha-989864:/home/docker/cp-test.txt ha-989864-m04:/home/docker/cp-test_ha-989864_ha-989864-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864-m04 "sudo cat /home/docker/cp-test_ha-989864_ha-989864-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 cp testdata/cp-test.txt ha-989864-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 cp ha-989864-m02:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1007018089/001/cp-test_ha-989864-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 cp ha-989864-m02:/home/docker/cp-test.txt ha-989864:/home/docker/cp-test_ha-989864-m02_ha-989864.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864 "sudo cat /home/docker/cp-test_ha-989864-m02_ha-989864.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 cp ha-989864-m02:/home/docker/cp-test.txt ha-989864-m03:/home/docker/cp-test_ha-989864-m02_ha-989864-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864-m03 "sudo cat /home/docker/cp-test_ha-989864-m02_ha-989864-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 cp ha-989864-m02:/home/docker/cp-test.txt ha-989864-m04:/home/docker/cp-test_ha-989864-m02_ha-989864-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864-m04 "sudo cat /home/docker/cp-test_ha-989864-m02_ha-989864-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 cp testdata/cp-test.txt ha-989864-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 cp ha-989864-m03:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1007018089/001/cp-test_ha-989864-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 cp ha-989864-m03:/home/docker/cp-test.txt ha-989864:/home/docker/cp-test_ha-989864-m03_ha-989864.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864 "sudo cat /home/docker/cp-test_ha-989864-m03_ha-989864.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 cp ha-989864-m03:/home/docker/cp-test.txt ha-989864-m02:/home/docker/cp-test_ha-989864-m03_ha-989864-m02.txt
E1213 19:35:42.303317  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:35:42.313109  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:35:42.336603  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:35:42.359829  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:35:42.401580  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:35:42.482996  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:35:42.644580  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864-m03 "sudo cat /home/docker/cp-test.txt"
E1213 19:35:42.965952  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864-m02 "sudo cat /home/docker/cp-test_ha-989864-m03_ha-989864-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 cp ha-989864-m03:/home/docker/cp-test.txt ha-989864-m04:/home/docker/cp-test_ha-989864-m03_ha-989864-m04.txt
E1213 19:35:43.607268  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864-m04 "sudo cat /home/docker/cp-test_ha-989864-m03_ha-989864-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 cp testdata/cp-test.txt ha-989864-m04:/home/docker/cp-test.txt
E1213 19:35:44.888662  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 cp ha-989864-m04:/home/docker/cp-test.txt /tmp/TestMultiControlPlaneserialCopyFile1007018089/001/cp-test_ha-989864-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 cp ha-989864-m04:/home/docker/cp-test.txt ha-989864:/home/docker/cp-test_ha-989864-m04_ha-989864.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864 "sudo cat /home/docker/cp-test_ha-989864-m04_ha-989864.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 cp ha-989864-m04:/home/docker/cp-test.txt ha-989864-m02:/home/docker/cp-test_ha-989864-m04_ha-989864-m02.txt
E1213 19:35:47.450445  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864-m02 "sudo cat /home/docker/cp-test_ha-989864-m04_ha-989864-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 cp ha-989864-m04:/home/docker/cp-test.txt ha-989864-m03:/home/docker/cp-test_ha-989864-m04_ha-989864-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 ssh -n ha-989864-m03 "sudo cat /home/docker/cp-test_ha-989864-m04_ha-989864-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (20.71s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (12.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 node stop m02 -v=7 --alsologtostderr
E1213 19:35:52.571985  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:365: (dbg) Done: out/minikube-linux-arm64 -p ha-989864 node stop m02 -v=7 --alsologtostderr: (12.04397609s)
ha_test.go:371: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 status -v=7 --alsologtostderr
ha_test.go:371: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-989864 status -v=7 --alsologtostderr: exit status 7 (798.966349ms)

                                                
                                                
-- stdout --
	ha-989864
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-989864-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-989864-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-989864-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 19:36:01.752583  649046 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:36:01.752772  649046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:36:01.752782  649046 out.go:358] Setting ErrFile to fd 2...
	I1213 19:36:01.752788  649046 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:36:01.753071  649046 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-596807/.minikube/bin
	I1213 19:36:01.753294  649046 out.go:352] Setting JSON to false
	I1213 19:36:01.753338  649046 mustload.go:65] Loading cluster: ha-989864
	I1213 19:36:01.753483  649046 notify.go:220] Checking for updates...
	I1213 19:36:01.753834  649046 config.go:182] Loaded profile config "ha-989864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:36:01.753857  649046 status.go:174] checking status of ha-989864 ...
	I1213 19:36:01.754478  649046 cli_runner.go:164] Run: docker container inspect ha-989864 --format={{.State.Status}}
	I1213 19:36:01.775036  649046 status.go:371] ha-989864 host status = "Running" (err=<nil>)
	I1213 19:36:01.775062  649046 host.go:66] Checking if "ha-989864" exists ...
	I1213 19:36:01.775394  649046 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-989864
	I1213 19:36:01.802225  649046 host.go:66] Checking if "ha-989864" exists ...
	I1213 19:36:01.802546  649046 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:36:01.802595  649046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-989864
	I1213 19:36:01.821407  649046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33527 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/ha-989864/id_rsa Username:docker}
	I1213 19:36:01.924312  649046 ssh_runner.go:195] Run: systemctl --version
	I1213 19:36:01.929033  649046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:36:01.943366  649046 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:36:02.028785  649046 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:4 ContainersRunning:3 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:53 OomKillDisable:true NGoroutines:71 SystemTime:2024-12-13 19:36:02.012307379 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1213 19:36:02.029590  649046 kubeconfig.go:125] found "ha-989864" server: "https://192.168.49.254:8443"
	I1213 19:36:02.029638  649046 api_server.go:166] Checking apiserver status ...
	I1213 19:36:02.029714  649046 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:36:02.042296  649046 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1437/cgroup
	I1213 19:36:02.054028  649046 api_server.go:182] apiserver freezer: "12:freezer:/docker/ffa776755b05f3ab8409072919da8bf896d32fac235092318af65a7eb9dffe71/crio/crio-9da8c87852cdf531396242afd10fd9c45f97358e7e0d60f0dad5c48a3969f3f1"
	I1213 19:36:02.054111  649046 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/ffa776755b05f3ab8409072919da8bf896d32fac235092318af65a7eb9dffe71/crio/crio-9da8c87852cdf531396242afd10fd9c45f97358e7e0d60f0dad5c48a3969f3f1/freezer.state
	I1213 19:36:02.064992  649046 api_server.go:204] freezer state: "THAWED"
	I1213 19:36:02.065021  649046 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1213 19:36:02.072905  649046 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1213 19:36:02.072951  649046 status.go:463] ha-989864 apiserver status = Running (err=<nil>)
	I1213 19:36:02.072964  649046 status.go:176] ha-989864 status: &{Name:ha-989864 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:36:02.072986  649046 status.go:174] checking status of ha-989864-m02 ...
	I1213 19:36:02.073345  649046 cli_runner.go:164] Run: docker container inspect ha-989864-m02 --format={{.State.Status}}
	I1213 19:36:02.091250  649046 status.go:371] ha-989864-m02 host status = "Stopped" (err=<nil>)
	I1213 19:36:02.091274  649046 status.go:384] host is not running, skipping remaining checks
	I1213 19:36:02.091282  649046 status.go:176] ha-989864-m02 status: &{Name:ha-989864-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:36:02.091303  649046 status.go:174] checking status of ha-989864-m03 ...
	I1213 19:36:02.091621  649046 cli_runner.go:164] Run: docker container inspect ha-989864-m03 --format={{.State.Status}}
	I1213 19:36:02.116771  649046 status.go:371] ha-989864-m03 host status = "Running" (err=<nil>)
	I1213 19:36:02.116803  649046 host.go:66] Checking if "ha-989864-m03" exists ...
	I1213 19:36:02.117134  649046 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-989864-m03
	I1213 19:36:02.136355  649046 host.go:66] Checking if "ha-989864-m03" exists ...
	I1213 19:36:02.136712  649046 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:36:02.136768  649046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-989864-m03
	I1213 19:36:02.157535  649046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33537 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/ha-989864-m03/id_rsa Username:docker}
	I1213 19:36:02.260149  649046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:36:02.273590  649046 kubeconfig.go:125] found "ha-989864" server: "https://192.168.49.254:8443"
	I1213 19:36:02.273625  649046 api_server.go:166] Checking apiserver status ...
	I1213 19:36:02.273691  649046 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:36:02.286070  649046 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1322/cgroup
	I1213 19:36:02.296184  649046 api_server.go:182] apiserver freezer: "12:freezer:/docker/08c775b39bde4cfcfcc1e09b414da81b9345cae140287eb2988fb1130af050ba/crio/crio-6eee60784e2d81595295b8b1aea6d7aad37e6adf032690fb4c8598cbab23f1eb"
	I1213 19:36:02.296255  649046 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/08c775b39bde4cfcfcc1e09b414da81b9345cae140287eb2988fb1130af050ba/crio/crio-6eee60784e2d81595295b8b1aea6d7aad37e6adf032690fb4c8598cbab23f1eb/freezer.state
	I1213 19:36:02.306520  649046 api_server.go:204] freezer state: "THAWED"
	I1213 19:36:02.306551  649046 api_server.go:253] Checking apiserver healthz at https://192.168.49.254:8443/healthz ...
	I1213 19:36:02.314579  649046 api_server.go:279] https://192.168.49.254:8443/healthz returned 200:
	ok
	I1213 19:36:02.314622  649046 status.go:463] ha-989864-m03 apiserver status = Running (err=<nil>)
	I1213 19:36:02.314632  649046 status.go:176] ha-989864-m03 status: &{Name:ha-989864-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:36:02.314652  649046 status.go:174] checking status of ha-989864-m04 ...
	I1213 19:36:02.314995  649046 cli_runner.go:164] Run: docker container inspect ha-989864-m04 --format={{.State.Status}}
	I1213 19:36:02.333451  649046 status.go:371] ha-989864-m04 host status = "Running" (err=<nil>)
	I1213 19:36:02.333483  649046 host.go:66] Checking if "ha-989864-m04" exists ...
	I1213 19:36:02.333878  649046 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" ha-989864-m04
	I1213 19:36:02.352513  649046 host.go:66] Checking if "ha-989864-m04" exists ...
	I1213 19:36:02.352846  649046 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:36:02.352898  649046 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" ha-989864-m04
	I1213 19:36:02.371254  649046 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33542 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/ha-989864-m04/id_rsa Username:docker}
	I1213 19:36:02.472143  649046 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:36:02.487716  649046 status.go:176] ha-989864-m04 status: &{Name:ha-989864-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (12.84s)
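Note: the status stderr shows how apiserver health is decided per control-plane node: find the kube-apiserver pid with pgrep, confirm its freezer cgroup is THAWED, then GET /healthz on the HA endpoint https://192.168.49.254:8443. A minimal sketch of that final probe (certificate verification is skipped because the host does not trust the cluster CA; acceptable for a liveness check, not for real clients):

    // Probe the HA virtual IP's /healthz endpoint, as the status check does.
    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        client := &http.Client{
            Timeout: 5 * time.Second,
            Transport: &http.Transport{
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        resp, err := client.Get("https://192.168.49.254:8443/healthz")
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Println(resp.StatusCode, string(body)) // expect: 200 ok
    }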

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
E1213 19:36:02.813550  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.84s)

                                                
                                    
TestMultiControlPlane/serial/RestartSecondaryNode (32.69s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 node start m02 -v=7 --alsologtostderr
E1213 19:36:23.295230  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:36:25.577574  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:422: (dbg) Done: out/minikube-linux-arm64 -p ha-989864 node start m02 -v=7 --alsologtostderr: (31.169675683s)
ha_test.go:430: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 status -v=7 --alsologtostderr
ha_test.go:430: (dbg) Done: out/minikube-linux-arm64 -p ha-989864 status -v=7 --alsologtostderr: (1.364441213s)
ha_test.go:450: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (32.69s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.359100266s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (1.36s)

                                                
                                    
TestMultiControlPlane/serial/RestartClusterKeepsNodes (206.08s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:458: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-989864 -v=7 --alsologtostderr
ha_test.go:464: (dbg) Run:  out/minikube-linux-arm64 stop -p ha-989864 -v=7 --alsologtostderr
E1213 19:36:53.280955  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:37:04.257395  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:464: (dbg) Done: out/minikube-linux-arm64 stop -p ha-989864 -v=7 --alsologtostderr: (37.284981496s)
ha_test.go:469: (dbg) Run:  out/minikube-linux-arm64 start -p ha-989864 --wait=true -v=7 --alsologtostderr
E1213 19:38:26.179500  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:469: (dbg) Done: out/minikube-linux-arm64 start -p ha-989864 --wait=true -v=7 --alsologtostderr: (2m48.581936998s)
ha_test.go:474: (dbg) Run:  out/minikube-linux-arm64 node list -p ha-989864
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (206.08s)

                                                
                                    
TestMultiControlPlane/serial/DeleteSecondaryNode (12.88s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:489: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 node delete m03 -v=7 --alsologtostderr
ha_test.go:489: (dbg) Done: out/minikube-linux-arm64 -p ha-989864 node delete m03 -v=7 --alsologtostderr: (11.840441587s)
ha_test.go:495: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 status -v=7 --alsologtostderr
ha_test.go:513: (dbg) Run:  kubectl get nodes
ha_test.go:521: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (12.88s)
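The go-template passed to kubectl above walks .items into .status.conditions and prints the status of each node's Ready condition, one per line. A sketch of the same template evaluated locally with Go's text/template, over a hand-written stand-in for `kubectl get nodes -o json` (the sample JSON is an assumption):

package main

import (
	"encoding/json"
	"os"
	"text/template"
)

func main() {
	// Hand-written stand-in for `kubectl get nodes -o json`.
	const nodes = `{"items":[{"status":{"conditions":[
	  {"type":"MemoryPressure","status":"False"},
	  {"type":"Ready","status":"True"}]}}]}`
	var v map[string]any
	if err := json.Unmarshal([]byte(nodes), &v); err != nil {
		panic(err)
	}
	// Same template the test passes to kubectl; lowercase keys because it
	// operates on the decoded JSON object, not on Go structs.
	t := template.Must(template.New("ready").Parse(
		`{{range .items}}{{range .status.conditions}}` +
			`{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`))
	t.Execute(os.Stdout, v) // prints " True" -- one line per Ready node
}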

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.78s)

                                                
                                    
TestMultiControlPlane/serial/StopCluster (35.83s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:533: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 stop -v=7 --alsologtostderr
E1213 19:40:42.299428  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:533: (dbg) Done: out/minikube-linux-arm64 -p ha-989864 stop -v=7 --alsologtostderr: (35.716218684s)
ha_test.go:539: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 status -v=7 --alsologtostderr
ha_test.go:539: (dbg) Non-zero exit: out/minikube-linux-arm64 -p ha-989864 status -v=7 --alsologtostderr: exit status 7 (118.31416ms)

                                                
                                                
-- stdout --
	ha-989864
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-989864-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-989864-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 19:40:52.900445  663630 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:40:52.900848  663630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:40:52.900863  663630 out.go:358] Setting ErrFile to fd 2...
	I1213 19:40:52.900870  663630 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:40:52.901329  663630 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-596807/.minikube/bin
	I1213 19:40:52.901654  663630 out.go:352] Setting JSON to false
	I1213 19:40:52.901703  663630 mustload.go:65] Loading cluster: ha-989864
	I1213 19:40:52.902551  663630 config.go:182] Loaded profile config "ha-989864": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:40:52.902609  663630 status.go:174] checking status of ha-989864 ...
	I1213 19:40:52.903495  663630 cli_runner.go:164] Run: docker container inspect ha-989864 --format={{.State.Status}}
	I1213 19:40:52.905574  663630 notify.go:220] Checking for updates...
	I1213 19:40:52.922559  663630 status.go:371] ha-989864 host status = "Stopped" (err=<nil>)
	I1213 19:40:52.922579  663630 status.go:384] host is not running, skipping remaining checks
	I1213 19:40:52.922586  663630 status.go:176] ha-989864 status: &{Name:ha-989864 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:40:52.922610  663630 status.go:174] checking status of ha-989864-m02 ...
	I1213 19:40:52.922915  663630 cli_runner.go:164] Run: docker container inspect ha-989864-m02 --format={{.State.Status}}
	I1213 19:40:52.947891  663630 status.go:371] ha-989864-m02 host status = "Stopped" (err=<nil>)
	I1213 19:40:52.947935  663630 status.go:384] host is not running, skipping remaining checks
	I1213 19:40:52.947943  663630 status.go:176] ha-989864-m02 status: &{Name:ha-989864-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:40:52.947964  663630 status.go:174] checking status of ha-989864-m04 ...
	I1213 19:40:52.948288  663630 cli_runner.go:164] Run: docker container inspect ha-989864-m04 --format={{.State.Status}}
	I1213 19:40:52.968361  663630 status.go:371] ha-989864-m04 host status = "Stopped" (err=<nil>)
	I1213 19:40:52.968385  663630 status.go:384] host is not running, skipping remaining checks
	I1213 19:40:52.968393  663630 status.go:176] ha-989864-m04 status: &{Name:ha-989864-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (35.83s)
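Note the exit status above: as I understand `minikube status --help`, the exit code is a bitfield (1: host/VM not OK, 2: cluster not OK, 4: Kubernetes not OK), so exit 7 is the expected result for a fully stopped cluster, not a failure of the command itself. A sketch of decoding it from Go, with the binary path and profile name taken from this log:

package main

import (
	"errors"
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-linux-arm64", "-p", "ha-989864", "status")
	out, err := cmd.CombinedOutput()
	var ee *exec.ExitError
	switch {
	case err == nil:
		fmt.Print("all components running\n", string(out))
	case errors.As(err, &ee):
		code := ee.ExitCode()
		// Bits: 1 = host not OK, 2 = cluster not OK, 4 = Kubernetes not OK.
		fmt.Printf("status exit %d (host down:%v cluster down:%v kubernetes down:%v)\n",
			code, code&1 != 0, code&2 != 0, code&4 != 0)
	default:
		fmt.Println("could not run minikube:", err)
	}
}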

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (108.48s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:562: (dbg) Run:  out/minikube-linux-arm64 start -p ha-989864 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio
E1213 19:41:10.020796  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
E1213 19:41:25.577595  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
ha_test.go:562: (dbg) Done: out/minikube-linux-arm64 start -p ha-989864 --wait=true -v=7 --alsologtostderr --driver=docker  --container-runtime=crio: (1m47.46413419s)
ha_test.go:568: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 status -v=7 --alsologtostderr
ha_test.go:586: (dbg) Run:  kubectl get nodes
ha_test.go:594: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (108.48s)

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.91s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:392: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.91s)

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (75.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:607: (dbg) Run:  out/minikube-linux-arm64 node add -p ha-989864 --control-plane -v=7 --alsologtostderr
ha_test.go:607: (dbg) Done: out/minikube-linux-arm64 node add -p ha-989864 --control-plane -v=7 --alsologtostderr: (1m14.826138319s)
ha_test.go:613: (dbg) Run:  out/minikube-linux-arm64 -p ha-989864 status -v=7 --alsologtostderr
ha_test.go:613: (dbg) Done: out/minikube-linux-arm64 -p ha-989864 status -v=7 --alsologtostderr: (1.011960455s)
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.84s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-linux-arm64 profile list --output json: (1.031466943s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (1.03s)

                                                
                                    
TestJSONOutput/start/Command (50.4s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-987564 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 start -p json-output-987564 --output=json --user=testUser --memory=2200 --wait=true --driver=docker  --container-runtime=crio: (50.392725291s)
--- PASS: TestJSONOutput/start/Command (50.40s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.78s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 pause -p json-output-987564 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.78s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.68s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 unpause -p json-output-987564 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.68s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (5.87s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-arm64 stop -p json-output-987564 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-arm64 stop -p json-output-987564 --output=json --user=testUser: (5.869365321s)
--- PASS: TestJSONOutput/stop/Command (5.87s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

                                                
                                                
=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.24s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-arm64 start -p json-output-error-687563 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p json-output-error-687563 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (87.223979ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"a843448d-5878-4cf9-86cc-fd5a5cb87908","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-687563] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4a423b5a-1c49-423e-ab91-52191b6e008f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20090"}}
	{"specversion":"1.0","id":"8c00fe1a-ed63-4108-af1e-ce220e752160","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"23606364-0260-4127-8e9b-ab8ecaa0d1f6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20090-596807/kubeconfig"}}
	{"specversion":"1.0","id":"185ddee8-fa71-4276-b9c1-d0e0560decdb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-596807/.minikube"}}
	{"specversion":"1.0","id":"86712df3-9876-445a-bf31-a800a03bd9e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"f63131d9-6df6-425d-864b-372fcb6aacde","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"589595be-9904-4615-8dcf-0c28dc9470ce","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/arm64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-687563" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p json-output-error-687563
--- PASS: TestErrorJSONOutput (0.24s)
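Each stdout line above is a CloudEvents-style JSON object, which is what `--output=json` is for: the `type` field distinguishes setup steps, info lines, and errors, and the payload lives under `data`. A minimal consumer sketch; the field names are taken from the log, and piping the minikube output into stdin is an assumption about how you would wire it up:

package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type event struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin) // e.g. minikube start --output=json | thisprog
	for sc.Scan() {
		var ev event
		if json.Unmarshal(sc.Bytes(), &ev) != nil {
			continue // tolerate any non-JSON lines
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n",
				ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit %s): %s\n",
				ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		}
	}
}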

                                                
                                    
TestKicCustomNetwork/create_custom_network (41.63s)

                                                
                                                
=== RUN   TestKicCustomNetwork/create_custom_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-059196 --network=
E1213 19:45:42.302651  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-059196 --network=: (39.486101033s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-059196" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-059196
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-059196: (2.118394045s)
--- PASS: TestKicCustomNetwork/create_custom_network (41.63s)

                                                
                                    
TestKicCustomNetwork/use_default_bridge_network (36.22s)

                                                
                                                
=== RUN   TestKicCustomNetwork/use_default_bridge_network
kic_custom_network_test.go:57: (dbg) Run:  out/minikube-linux-arm64 start -p docker-network-646022 --network=bridge
kic_custom_network_test.go:57: (dbg) Done: out/minikube-linux-arm64 start -p docker-network-646022 --network=bridge: (34.186949953s)
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
helpers_test.go:175: Cleaning up "docker-network-646022" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p docker-network-646022
E1213 19:46:25.577696  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p docker-network-646022: (2.005850316s)
--- PASS: TestKicCustomNetwork/use_default_bridge_network (36.22s)

                                                
                                    
TestKicExistingNetwork (34.68s)

                                                
                                                
=== RUN   TestKicExistingNetwork
I1213 19:46:27.467598  602199 cli_runner.go:164] Run: docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W1213 19:46:27.485164  602199 cli_runner.go:211] docker network inspect existing-network --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I1213 19:46:27.485251  602199 network_create.go:284] running [docker network inspect existing-network] to gather additional debugging logs...
I1213 19:46:27.485268  602199 cli_runner.go:164] Run: docker network inspect existing-network
W1213 19:46:27.501292  602199 cli_runner.go:211] docker network inspect existing-network returned with exit code 1
I1213 19:46:27.501321  602199 network_create.go:287] error running [docker network inspect existing-network]: docker network inspect existing-network: exit status 1
stdout:
[]

                                                
                                                
stderr:
Error response from daemon: network existing-network not found
I1213 19:46:27.501337  602199 network_create.go:289] output of [docker network inspect existing-network]: -- stdout --
[]

                                                
                                                
-- /stdout --
** stderr ** 
Error response from daemon: network existing-network not found

                                                
                                                
** /stderr **
I1213 19:46:27.501465  602199 cli_runner.go:164] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I1213 19:46:27.520329  602199 network.go:211] skipping subnet 192.168.49.0/24 that is taken: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 IsPrivate:true Interface:{IfaceName:br-308de6e6a993 IfaceIPv4:192.168.49.1 IfaceMTU:1500 IfaceMAC:02:42:33:30:9e:c3} reservation:<nil>}
I1213 19:46:27.520789  602199 network.go:206] using free private subnet 192.168.58.0/24: &{IP:192.168.58.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.58.0/24 Gateway:192.168.58.1 ClientMin:192.168.58.2 ClientMax:192.168.58.254 Broadcast:192.168.58.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0x4001ed4e00}
I1213 19:46:27.520858  602199 network_create.go:124] attempt to create docker network existing-network 192.168.58.0/24 with gateway 192.168.58.1 and MTU of 1500 ...
I1213 19:46:27.520916  602199 cli_runner.go:164] Run: docker network create --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true --label=name.minikube.sigs.k8s.io=existing-network existing-network
I1213 19:46:27.593919  602199 network_create.go:108] docker network existing-network 192.168.58.0/24 created
kic_custom_network_test.go:150: (dbg) Run:  docker network ls --format {{.Name}}
kic_custom_network_test.go:93: (dbg) Run:  out/minikube-linux-arm64 start -p existing-network-948594 --network=existing-network
kic_custom_network_test.go:93: (dbg) Done: out/minikube-linux-arm64 start -p existing-network-948594 --network=existing-network: (32.279783961s)
helpers_test.go:175: Cleaning up "existing-network-948594" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p existing-network-948594
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p existing-network-948594: (2.241627506s)
I1213 19:47:02.132211  602199 cli_runner.go:164] Run: docker network ls --filter=label=existing-network --format {{.Name}}
--- PASS: TestKicExistingNetwork (34.68s)
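The network_create lines above show how minikube picks a subnet for a new kic network: it inspects candidate private /24 blocks, skips any already owned by an existing bridge (192.168.49.0/24 here), and creates the first free one. A rough sketch of that scan; the `taken` map stands in for the `docker network inspect` calls, and the 9-wide step is an inference from the 49.0 to 58.0 jump seen in this log:

package main

import "fmt"

func main() {
	// Stand-in for what `docker network inspect` reports as in use.
	taken := map[string]bool{"192.168.49.0/24": true}
	for third := 49; third <= 245; third += 9 {
		cidr := fmt.Sprintf("192.168.%d.0/24", third)
		if taken[cidr] {
			fmt.Println("skipping subnet", cidr, "that is taken")
			continue
		}
		fmt.Println("using free private subnet", cidr)
		// Matching the log, minikube then runs: docker network create
		//   --driver=bridge --subnet=192.168.58.0/24 --gateway=192.168.58.1 ...
		return
	}
}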

                                                
                                    
TestKicCustomSubnet (36.01s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-subnet-857687 --subnet=192.168.60.0/24
kic_custom_network_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-subnet-857687 --subnet=192.168.60.0/24: (33.820724381s)
kic_custom_network_test.go:161: (dbg) Run:  docker network inspect custom-subnet-857687 --format "{{(index .IPAM.Config 0).Subnet}}"
helpers_test.go:175: Cleaning up "custom-subnet-857687" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p custom-subnet-857687
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p custom-subnet-857687: (2.168544176s)
--- PASS: TestKicCustomSubnet (36.01s)

                                                
                                    
TestKicStaticIP (35.49s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:132: (dbg) Run:  out/minikube-linux-arm64 start -p static-ip-580323 --static-ip=192.168.200.200
E1213 19:47:48.644879  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
kic_custom_network_test.go:132: (dbg) Done: out/minikube-linux-arm64 start -p static-ip-580323 --static-ip=192.168.200.200: (33.144115353s)
kic_custom_network_test.go:138: (dbg) Run:  out/minikube-linux-arm64 -p static-ip-580323 ip
helpers_test.go:175: Cleaning up "static-ip-580323" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p static-ip-580323
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p static-ip-580323: (2.175483059s)
--- PASS: TestKicStaticIP (35.49s)

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-arm64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (69.82s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p first-821873 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p first-821873 --driver=docker  --container-runtime=crio: (29.910956369s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p second-824253 --driver=docker  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p second-824253 --driver=docker  --container-runtime=crio: (34.040570088s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile first-821873
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-arm64 profile second-824253
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-arm64 profile list -ojson
helpers_test.go:175: Cleaning up "second-824253" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p second-824253
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p second-824253: (2.052131167s)
helpers_test.go:175: Cleaning up "first-821873" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p first-821873
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p first-821873: (2.349041153s)
--- PASS: TestMinikubeProfile (69.82s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (9.63s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-1-757499 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-1-757499 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (8.633822838s)
--- PASS: TestMountStart/serial/StartWithMountFirst (9.63s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-1-757499 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountFirst (0.26s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (7.13s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-759419 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-759419 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=docker  --container-runtime=crio: (6.13442021s)
--- PASS: TestMountStart/serial/StartWithMountSecond (7.13s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-759419 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountSecond (0.27s)

                                                
                                    
TestMountStart/serial/DeleteFirst (1.68s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p mount-start-1-757499 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p mount-start-1-757499 --alsologtostderr -v=5: (1.681224322s)
--- PASS: TestMountStart/serial/DeleteFirst (1.68s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-759419 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.28s)

                                                
                                    
TestMountStart/serial/Stop (1.22s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-arm64 stop -p mount-start-2-759419
mount_start_test.go:155: (dbg) Done: out/minikube-linux-arm64 stop -p mount-start-2-759419: (1.218786383s)
--- PASS: TestMountStart/serial/Stop (1.22s)

                                                
                                    
TestMountStart/serial/RestartStopped (8.53s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-arm64 start -p mount-start-2-759419
mount_start_test.go:166: (dbg) Done: out/minikube-linux-arm64 start -p mount-start-2-759419: (7.533405428s)
--- PASS: TestMountStart/serial/RestartStopped (8.53s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-arm64 -p mount-start-2-759419 ssh -- ls /minikube-host
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.27s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (80.99s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-048100 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
E1213 19:50:42.298960  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-linux-arm64 start -p multinode-048100 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (1m20.411582258s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (80.99s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (7.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048100 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048100 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-arm64 kubectl -p multinode-048100 -- rollout status deployment/busybox: (5.144741543s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048100 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048100 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048100 -- exec busybox-7dff88458-hd725 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048100 -- exec busybox-7dff88458-wmzvd -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048100 -- exec busybox-7dff88458-hd725 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048100 -- exec busybox-7dff88458-wmzvd -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048100 -- exec busybox-7dff88458-hd725 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048100 -- exec busybox-7dff88458-wmzvd -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (7.10s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (1.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048100 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048100 -- exec busybox-7dff88458-hd725 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048100 -- exec busybox-7dff88458-hd725 -- sh -c "ping -c 1 192.168.67.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048100 -- exec busybox-7dff88458-wmzvd -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-arm64 kubectl -p multinode-048100 -- exec busybox-7dff88458-wmzvd -- sh -c "ping -c 1 192.168.67.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (1.04s)
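The shell pipeline above (`nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`) recovers the host gateway IP that the pod then pings: in busybox's nslookup output the fifth line is the answer's Address line, and its third space-separated field is the IP. The equivalent extraction in Go, over sample output that is an assumption about busybox's exact format:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Assumed busybox nslookup output for host.minikube.internal.
	out := "Server:    10.96.0.10\n" +
		"Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local\n" +
		"\n" +
		"Name:      host.minikube.internal\n" +
		"Address 1: 192.168.67.1 host.minikube.internal\n"
	lines := strings.Split(out, "\n")
	if len(lines) < 5 {
		return
	}
	fields := strings.Split(lines[4], " ") // awk 'NR==5', then cut -d' '
	if len(fields) >= 3 {
		fmt.Println(fields[2]) // -f3 -> 192.168.67.1, the host/gateway IP pinged above
	}
}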

                                                
                                    
TestMultiNode/serial/AddNode (32.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-048100 -v 3 --alsologtostderr
E1213 19:51:25.577614  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-linux-arm64 node add -p multinode-048100 -v 3 --alsologtostderr: (32.120501095s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (32.86s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.1s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-048100 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.10s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.71s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.71s)

                                                
                                    
TestMultiNode/serial/CopyFile (11.18s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 cp testdata/cp-test.txt multinode-048100:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 ssh -n multinode-048100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 cp multinode-048100:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1957857395/001/cp-test_multinode-048100.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 ssh -n multinode-048100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 cp multinode-048100:/home/docker/cp-test.txt multinode-048100-m02:/home/docker/cp-test_multinode-048100_multinode-048100-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 ssh -n multinode-048100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 ssh -n multinode-048100-m02 "sudo cat /home/docker/cp-test_multinode-048100_multinode-048100-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 cp multinode-048100:/home/docker/cp-test.txt multinode-048100-m03:/home/docker/cp-test_multinode-048100_multinode-048100-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 ssh -n multinode-048100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 ssh -n multinode-048100-m03 "sudo cat /home/docker/cp-test_multinode-048100_multinode-048100-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 cp testdata/cp-test.txt multinode-048100-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 ssh -n multinode-048100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 cp multinode-048100-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1957857395/001/cp-test_multinode-048100-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 ssh -n multinode-048100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 cp multinode-048100-m02:/home/docker/cp-test.txt multinode-048100:/home/docker/cp-test_multinode-048100-m02_multinode-048100.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 ssh -n multinode-048100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 ssh -n multinode-048100 "sudo cat /home/docker/cp-test_multinode-048100-m02_multinode-048100.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 cp multinode-048100-m02:/home/docker/cp-test.txt multinode-048100-m03:/home/docker/cp-test_multinode-048100-m02_multinode-048100-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 ssh -n multinode-048100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 ssh -n multinode-048100-m03 "sudo cat /home/docker/cp-test_multinode-048100-m02_multinode-048100-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 cp testdata/cp-test.txt multinode-048100-m03:/home/docker/cp-test.txt
E1213 19:52:05.382233  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 ssh -n multinode-048100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 cp multinode-048100-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1957857395/001/cp-test_multinode-048100-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 ssh -n multinode-048100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 cp multinode-048100-m03:/home/docker/cp-test.txt multinode-048100:/home/docker/cp-test_multinode-048100-m03_multinode-048100.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 ssh -n multinode-048100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 ssh -n multinode-048100 "sudo cat /home/docker/cp-test_multinode-048100-m03_multinode-048100.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 cp multinode-048100-m03:/home/docker/cp-test.txt multinode-048100-m02:/home/docker/cp-test_multinode-048100-m03_multinode-048100-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 ssh -n multinode-048100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 ssh -n multinode-048100-m02 "sudo cat /home/docker/cp-test_multinode-048100-m03_multinode-048100-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (11.18s)

                                                
                                    
TestMultiNode/serial/StopNode (2.32s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-arm64 -p multinode-048100 node stop m03: (1.225629689s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-048100 status: exit status 7 (563.926089ms)

                                                
                                                
-- stdout --
	multinode-048100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-048100-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-048100-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-048100 status --alsologtostderr: exit status 7 (530.847248ms)

                                                
                                                
-- stdout --
	multinode-048100
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-048100-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-048100-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I1213 19:52:10.463783  717415 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:52:10.463966  717415 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:52:10.463978  717415 out.go:358] Setting ErrFile to fd 2...
	I1213 19:52:10.463983  717415 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:52:10.464214  717415 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-596807/.minikube/bin
	I1213 19:52:10.464398  717415 out.go:352] Setting JSON to false
	I1213 19:52:10.464423  717415 mustload.go:65] Loading cluster: multinode-048100
	I1213 19:52:10.464540  717415 notify.go:220] Checking for updates...
	I1213 19:52:10.464839  717415 config.go:182] Loaded profile config "multinode-048100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:52:10.464855  717415 status.go:174] checking status of multinode-048100 ...
	I1213 19:52:10.465395  717415 cli_runner.go:164] Run: docker container inspect multinode-048100 --format={{.State.Status}}
	I1213 19:52:10.486563  717415 status.go:371] multinode-048100 host status = "Running" (err=<nil>)
	I1213 19:52:10.486589  717415 host.go:66] Checking if "multinode-048100" exists ...
	I1213 19:52:10.486926  717415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-048100
	I1213 19:52:10.504929  717415 host.go:66] Checking if "multinode-048100" exists ...
	I1213 19:52:10.505245  717415 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:52:10.505292  717415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-048100
	I1213 19:52:10.530021  717415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33647 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/multinode-048100/id_rsa Username:docker}
	I1213 19:52:10.627651  717415 ssh_runner.go:195] Run: systemctl --version
	I1213 19:52:10.631940  717415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:52:10.643462  717415 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 19:52:10.702848  717415 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:3 ContainersRunning:2 ContainersPaused:0 ContainersStopped:1 Images:3 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:41 OomKillDisable:true NGoroutines:61 SystemTime:2024-12-13 19:52:10.693523684 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1213 19:52:10.703461  717415 kubeconfig.go:125] found "multinode-048100" server: "https://192.168.67.2:8443"
	I1213 19:52:10.703503  717415 api_server.go:166] Checking apiserver status ...
	I1213 19:52:10.703557  717415 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I1213 19:52:10.717340  717415 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1436/cgroup
	I1213 19:52:10.728673  717415 api_server.go:182] apiserver freezer: "12:freezer:/docker/6655c3d695abd19f2d628c63b1250de3e6a8630ebd4358f5596299d4b3298074/crio/crio-a9b9e21a19e22d59ba7257d8c45bd62a1647aa13a2173f763da07a31662b54bc"
	I1213 19:52:10.728747  717415 ssh_runner.go:195] Run: sudo cat /sys/fs/cgroup/freezer/docker/6655c3d695abd19f2d628c63b1250de3e6a8630ebd4358f5596299d4b3298074/crio/crio-a9b9e21a19e22d59ba7257d8c45bd62a1647aa13a2173f763da07a31662b54bc/freezer.state
	I1213 19:52:10.741190  717415 api_server.go:204] freezer state: "THAWED"
	I1213 19:52:10.741220  717415 api_server.go:253] Checking apiserver healthz at https://192.168.67.2:8443/healthz ...
	I1213 19:52:10.749139  717415 api_server.go:279] https://192.168.67.2:8443/healthz returned 200:
	ok
	I1213 19:52:10.749179  717415 status.go:463] multinode-048100 apiserver status = Running (err=<nil>)
	I1213 19:52:10.749195  717415 status.go:176] multinode-048100 status: &{Name:multinode-048100 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:52:10.749218  717415 status.go:174] checking status of multinode-048100-m02 ...
	I1213 19:52:10.749531  717415 cli_runner.go:164] Run: docker container inspect multinode-048100-m02 --format={{.State.Status}}
	I1213 19:52:10.769144  717415 status.go:371] multinode-048100-m02 host status = "Running" (err=<nil>)
	I1213 19:52:10.769176  717415 host.go:66] Checking if "multinode-048100-m02" exists ...
	I1213 19:52:10.769588  717415 cli_runner.go:164] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" multinode-048100-m02
	I1213 19:52:10.787800  717415 host.go:66] Checking if "multinode-048100-m02" exists ...
	I1213 19:52:10.788127  717415 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I1213 19:52:10.788176  717415 cli_runner.go:164] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" multinode-048100-m02
	I1213 19:52:10.806196  717415 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:33652 SSHKeyPath:/home/jenkins/minikube-integration/20090-596807/.minikube/machines/multinode-048100-m02/id_rsa Username:docker}
	I1213 19:52:10.903228  717415 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I1213 19:52:10.914887  717415 status.go:176] multinode-048100-m02 status: &{Name:multinode-048100-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:52:10.914937  717415 status.go:174] checking status of multinode-048100-m03 ...
	I1213 19:52:10.915248  717415 cli_runner.go:164] Run: docker container inspect multinode-048100-m03 --format={{.State.Status}}
	I1213 19:52:10.932611  717415 status.go:371] multinode-048100-m03 host status = "Stopped" (err=<nil>)
	I1213 19:52:10.932633  717415 status.go:384] host is not running, skipping remaining checks
	I1213 19:52:10.932640  717415 status.go:176] multinode-048100-m03 status: &{Name:multinode-048100-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.32s)
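
The status output above is assembled from three independent probes, all visible in the stderr log: docker container inspect for the host, systemctl is-active over SSH for the kubelet, and an HTTPS healthz request for the apiserver. A minimal sketch of running the same probes by hand, assuming the profile name and apiserver endpoint (192.168.67.2:8443) taken from this log:

    PROFILE=multinode-048100
    # Host: container state straight from Docker (status.go:371 above)
    docker container inspect "$PROFILE" --format '{{.State.Status}}'
    # Kubelet: is-active exits 0 only when the unit is running
    minikube ssh -p "$PROFILE" "sudo systemctl is-active --quiet service kubelet" && echo "kubelet: Running"
    # Apiserver: the same healthz endpoint probed at api_server.go:253 above
    curl -sk https://192.168.67.2:8443/healthz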

TestMultiNode/serial/StartAfterStop (10.15s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-arm64 -p multinode-048100 node start m03 -v=7 --alsologtostderr: (9.341943738s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (10.15s)

TestMultiNode/serial/RestartKeepsNodes (114.78s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-048100
multinode_test.go:321: (dbg) Run:  out/minikube-linux-arm64 stop -p multinode-048100
multinode_test.go:321: (dbg) Done: out/minikube-linux-arm64 stop -p multinode-048100: (24.792861002s)
multinode_test.go:326: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-048100 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-linux-arm64 start -p multinode-048100 --wait=true -v=8 --alsologtostderr: (1m29.82843886s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-048100
--- PASS: TestMultiNode/serial/RestartKeepsNodes (114.78s)

TestMultiNode/serial/DeleteNode (5.62s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-arm64 -p multinode-048100 node delete m03: (4.880599045s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (5.62s)
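
The go-template in the final check above prints one Ready condition status per node. For reference, a jsonpath equivalent (a sketch, not part of the test) that also prints each node name:

    kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}: {.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'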

TestMultiNode/serial/StopMultiNode (23.99s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 stop
multinode_test.go:345: (dbg) Done: out/minikube-linux-arm64 -p multinode-048100 stop: (23.734632397s)
multinode_test.go:351: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-048100 status: exit status 7 (136.265976ms)

-- stdout --
	multinode-048100
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-048100-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-linux-arm64 -p multinode-048100 status --alsologtostderr: exit status 7 (121.508487ms)

-- stdout --
	multinode-048100
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-048100-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I1213 19:54:45.433195  725269 out.go:345] Setting OutFile to fd 1 ...
	I1213 19:54:45.433575  725269 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:54:45.433591  725269 out.go:358] Setting ErrFile to fd 2...
	I1213 19:54:45.433599  725269 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 19:54:45.433922  725269 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-596807/.minikube/bin
	I1213 19:54:45.434214  725269 out.go:352] Setting JSON to false
	I1213 19:54:45.434319  725269 notify.go:220] Checking for updates...
	I1213 19:54:45.435230  725269 mustload.go:65] Loading cluster: multinode-048100
	I1213 19:54:45.435875  725269 config.go:182] Loaded profile config "multinode-048100": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 19:54:45.435916  725269 status.go:174] checking status of multinode-048100 ...
	I1213 19:54:45.437529  725269 cli_runner.go:164] Run: docker container inspect multinode-048100 --format={{.State.Status}}
	I1213 19:54:45.458618  725269 status.go:371] multinode-048100 host status = "Stopped" (err=<nil>)
	I1213 19:54:45.458645  725269 status.go:384] host is not running, skipping remaining checks
	I1213 19:54:45.458653  725269 status.go:176] multinode-048100 status: &{Name:multinode-048100 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I1213 19:54:45.458698  725269 status.go:174] checking status of multinode-048100-m02 ...
	I1213 19:54:45.459023  725269 cli_runner.go:164] Run: docker container inspect multinode-048100-m02 --format={{.State.Status}}
	I1213 19:54:45.488035  725269 status.go:371] multinode-048100-m02 host status = "Stopped" (err=<nil>)
	I1213 19:54:45.488060  725269 status.go:384] host is not running, skipping remaining checks
	I1213 19:54:45.488067  725269 status.go:176] multinode-048100-m02 status: &{Name:multinode-048100-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (23.99s)
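
The two "Non-zero exit ... exit status 7" results above are the expected outcome: minikube status reports a stopped cluster through its exit code rather than through stderr. A sketch of how a script might consume that, reusing the profile name from this log:

    minikube -p multinode-048100 status >/dev/null 2>&1
    rc=$?
    # rc is 0 when all components are Running; it was 7 in the runs above
    [ "$rc" -ne 0 ] && echo "cluster not fully running (exit $rc)"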

TestMultiNode/serial/RestartMultiNode (53.66s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-048100 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-arm64 start -p multinode-048100 --wait=true -v=8 --alsologtostderr --driver=docker  --container-runtime=crio: (52.861534034s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-arm64 -p multinode-048100 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (53.66s)

TestMultiNode/serial/ValidateNameConflict (33.39s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-arm64 node list -p multinode-048100
multinode_test.go:464: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-048100-m02 --driver=docker  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p multinode-048100-m02 --driver=docker  --container-runtime=crio: exit status 14 (98.165982ms)

-- stdout --
	* [multinode-048100-m02] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20090
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20090-596807/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-596807/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-048100-m02' is duplicated with machine name 'multinode-048100-m02' in profile 'multinode-048100'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-arm64 start -p multinode-048100-m03 --driver=docker  --container-runtime=crio
E1213 19:55:42.299370  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
multinode_test.go:472: (dbg) Done: out/minikube-linux-arm64 start -p multinode-048100-m03 --driver=docker  --container-runtime=crio: (30.829338115s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-arm64 node add -p multinode-048100
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-arm64 node add -p multinode-048100: exit status 80 (358.411557ms)

-- stdout --
	* Adding node m03 to cluster multinode-048100 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-048100-m03 already exists in multinode-048100-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-arm64 delete -p multinode-048100-m03
multinode_test.go:484: (dbg) Done: out/minikube-linux-arm64 delete -p multinode-048100-m03: (2.043848512s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (33.39s)
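
The exit-14 failure above is the guard under test: a new profile name may not collide with a machine name inside an existing profile (multinode-048100-m02 is already the second machine of multinode-048100). A sketch of the safe pattern; "some-unique-name" is a placeholder:

    minikube profile list --output=json   # inspect existing profile and machine names first
    minikube start -p some-unique-name --driver=docker --container-runtime=crio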

TestPreload (130.15s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-857876 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4
E1213 19:56:25.577702  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-857876 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.24.4: (1m35.56230784s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-857876 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-arm64 -p test-preload-857876 image pull gcr.io/k8s-minikube/busybox: (3.507665053s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-arm64 stop -p test-preload-857876
preload_test.go:58: (dbg) Done: out/minikube-linux-arm64 stop -p test-preload-857876: (5.815514185s)
preload_test.go:66: (dbg) Run:  out/minikube-linux-arm64 start -p test-preload-857876 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio
preload_test.go:66: (dbg) Done: out/minikube-linux-arm64 start -p test-preload-857876 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=docker  --container-runtime=crio: (22.514500754s)
preload_test.go:71: (dbg) Run:  out/minikube-linux-arm64 -p test-preload-857876 image list
helpers_test.go:175: Cleaning up "test-preload-857876" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p test-preload-857876
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p test-preload-857876: (2.433811321s)
--- PASS: TestPreload (130.15s)
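
Condensed, the preload scenario above is: start an older Kubernetes with --preload=false, pull an extra image, stop, restart with preload logic active, and confirm the manually pulled image survived. The same commands stripped of test-only flags (profile name reused from the log):

    minikube start -p test-preload-857876 --preload=false --kubernetes-version=v1.24.4 --driver=docker --container-runtime=crio
    minikube -p test-preload-857876 image pull gcr.io/k8s-minikube/busybox
    minikube stop -p test-preload-857876
    minikube start -p test-preload-857876 --driver=docker --container-runtime=crio
    minikube -p test-preload-857876 image list   # busybox should still be listed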

TestScheduledStopUnix (108.79s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-arm64 start -p scheduled-stop-219106 --memory=2048 --driver=docker  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-arm64 start -p scheduled-stop-219106 --memory=2048 --driver=docker  --container-runtime=crio: (32.345971976s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-219106 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-arm64 status --format={{.TimeToStop}} -p scheduled-stop-219106 -n scheduled-stop-219106
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-219106 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
I1213 19:58:59.760420  602199 retry.go:31] will retry after 56.391µs: open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/scheduled-stop-219106/pid: no such file or directory
I1213 19:58:59.761623  602199 retry.go:31] will retry after 120.242µs: open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/scheduled-stop-219106/pid: no such file or directory
I1213 19:58:59.762901  602199 retry.go:31] will retry after 296.925µs: open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/scheduled-stop-219106/pid: no such file or directory
I1213 19:58:59.763983  602199 retry.go:31] will retry after 259.475µs: open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/scheduled-stop-219106/pid: no such file or directory
I1213 19:58:59.765085  602199 retry.go:31] will retry after 272.108µs: open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/scheduled-stop-219106/pid: no such file or directory
I1213 19:58:59.766213  602199 retry.go:31] will retry after 618.35µs: open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/scheduled-stop-219106/pid: no such file or directory
I1213 19:58:59.767321  602199 retry.go:31] will retry after 1.612562ms: open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/scheduled-stop-219106/pid: no such file or directory
I1213 19:58:59.769246  602199 retry.go:31] will retry after 2.395448ms: open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/scheduled-stop-219106/pid: no such file or directory
I1213 19:58:59.772508  602199 retry.go:31] will retry after 3.718871ms: open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/scheduled-stop-219106/pid: no such file or directory
I1213 19:58:59.776866  602199 retry.go:31] will retry after 3.421914ms: open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/scheduled-stop-219106/pid: no such file or directory
I1213 19:58:59.781060  602199 retry.go:31] will retry after 6.393778ms: open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/scheduled-stop-219106/pid: no such file or directory
I1213 19:58:59.788386  602199 retry.go:31] will retry after 10.435889ms: open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/scheduled-stop-219106/pid: no such file or directory
I1213 19:58:59.799666  602199 retry.go:31] will retry after 18.417353ms: open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/scheduled-stop-219106/pid: no such file or directory
I1213 19:58:59.818977  602199 retry.go:31] will retry after 10.418476ms: open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/scheduled-stop-219106/pid: no such file or directory
I1213 19:58:59.830212  602199 retry.go:31] will retry after 33.804237ms: open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/scheduled-stop-219106/pid: no such file or directory
I1213 19:58:59.866356  602199 retry.go:31] will retry after 27.848841ms: open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/scheduled-stop-219106/pid: no such file or directory
I1213 19:58:59.894534  602199 retry.go:31] will retry after 89.167618ms: open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/scheduled-stop-219106/pid: no such file or directory
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-219106 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-219106 -n scheduled-stop-219106
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-219106
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-arm64 stop -p scheduled-stop-219106 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-arm64 status -p scheduled-stop-219106
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p scheduled-stop-219106: exit status 7 (80.601883ms)

-- stdout --
	scheduled-stop-219106
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-219106 -n scheduled-stop-219106
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p scheduled-stop-219106 -n scheduled-stop-219106: exit status 7 (72.696301ms)

-- stdout --
	Stopped

-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-219106" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p scheduled-stop-219106
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p scheduled-stop-219106: (4.513894965s)
--- PASS: TestScheduledStopUnix (108.79s)
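
The scheduled-stop flow exercised above, as plain commands (every flag appears verbatim in the log): arm a stop, cancel it, re-arm with a short deadline, then confirm the profile went down:

    minikube stop -p scheduled-stop-219106 --schedule 5m       # arm a stop five minutes out
    minikube stop -p scheduled-stop-219106 --cancel-scheduled  # disarm it
    minikube stop -p scheduled-stop-219106 --schedule 15s      # re-arm; fires about 15s later
    minikube status -p scheduled-stop-219106                   # exit status 7 once the stop lands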

TestInsufficientStorage (13.75s)

=== RUN   TestInsufficientStorage
status_test.go:50: (dbg) Run:  out/minikube-linux-arm64 start -p insufficient-storage-315487 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio
status_test.go:50: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p insufficient-storage-315487 --memory=2048 --output=json --wait=true --driver=docker  --container-runtime=crio: exit status 26 (11.209741763s)

-- stdout --
	{"specversion":"1.0","id":"dfd7d18d-9ef4-44f2-9e5a-ef74bb68b1e6","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[insufficient-storage-315487] minikube v1.34.0 on Ubuntu 20.04 (arm64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"4b48465b-2ada-45bb-ab3f-caf085614433","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20090"}}
	{"specversion":"1.0","id":"01d0f381-4729-4095-b861-2932eaa432f3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"54aae2c4-68a1-4685-b735-316c0140753f","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/20090-596807/kubeconfig"}}
	{"specversion":"1.0","id":"0f18f6b5-e75a-456c-a739-28527ce8357a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-596807/.minikube"}}
	{"specversion":"1.0","id":"00e250de-8666-4c5b-b66e-1f7dbf6511e1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-arm64"}}
	{"specversion":"1.0","id":"be7dddca-c1e3-4f46-9b2b-0c5a3e417e57","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"986872f2-a41b-47f9-81c5-01635f7c6247","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_STORAGE_CAPACITY=100"}}
	{"specversion":"1.0","id":"f694aabb-80e7-432a-8ee7-1051a617e1af","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_TEST_AVAILABLE_STORAGE=19"}}
	{"specversion":"1.0","id":"78fc833e-0adb-4e52-aa4e-f2e9a4d26048","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"1","message":"Using the docker driver based on user configuration","name":"Selecting Driver","totalsteps":"19"}}
	{"specversion":"1.0","id":"561e3e66-5daa-4f0e-80f7-992d63cabd47","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"Using Docker driver with root privileges"}}
	{"specversion":"1.0","id":"a5158174-b7f9-4f37-821d-e6d7f3e92622","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"3","message":"Starting \"insufficient-storage-315487\" primary control-plane node in \"insufficient-storage-315487\" cluster","name":"Starting Node","totalsteps":"19"}}
	{"specversion":"1.0","id":"527a60bb-f01a-42b7-95c0-3f31577f4491","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"5","message":"Pulling base image v0.0.45-1734029593-20090 ...","name":"Pulling Base Image","totalsteps":"19"}}
	{"specversion":"1.0","id":"4d5d6f6b-d54e-4dc9-b84c-c0d77e7819a9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"8","message":"Creating docker container (CPUs=2, Memory=2048MB) ...","name":"Creating Container","totalsteps":"19"}}
	{"specversion":"1.0","id":"f8e322de-c3ff-47a4-b607-59ea312a6961","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"Try one or more of the following to free up space on the device:\n\t\n\t\t\t1. Run \"docker system prune\" to remove unused Docker data (optionally with \"-a\")\n\t\t\t2. Increase the storage allocated to Docker for Desktop by clicking on:\n\t\t\t\tDocker icon \u003e Preferences \u003e Resources \u003e Disk Image Size\n\t\t\t3. Run \"minikube ssh -- docker system prune\" if using the Docker container runtime","exitcode":"26","issues":"https://github.com/kubernetes/minikube/issues/9024","message":"Docker is out of disk space! (/var is at 100% of capacity). You can pass '--force' to skip this check.","name":"RSRC_DOCKER_STORAGE","url":""}}

-- /stdout --
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-315487 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-315487 --output=json --layout=cluster: exit status 7 (305.833382ms)

-- stdout --
	{"Name":"insufficient-storage-315487","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","Step":"Creating Container","StepDetail":"Creating docker container (CPUs=2, Memory=2048MB) ...","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-315487","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1213 20:00:27.104421  743161 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-315487" does not appear in /home/jenkins/minikube-integration/20090-596807/kubeconfig

** /stderr **
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p insufficient-storage-315487 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p insufficient-storage-315487 --output=json --layout=cluster: exit status 7 (303.85809ms)

-- stdout --
	{"Name":"insufficient-storage-315487","StatusCode":507,"StatusName":"InsufficientStorage","StatusDetail":"/var is almost out of disk space","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":500,"StatusName":"Error"}},"Nodes":[{"Name":"insufficient-storage-315487","StatusCode":507,"StatusName":"InsufficientStorage","Components":{"apiserver":{"Name":"apiserver","StatusCode":405,"StatusName":"Stopped"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

-- /stdout --
** stderr ** 
	E1213 20:00:27.410717  743223 status.go:458] kubeconfig endpoint: get endpoint: "insufficient-storage-315487" does not appear in /home/jenkins/minikube-integration/20090-596807/kubeconfig
	E1213 20:00:27.421395  743223 status.go:258] unable to read event log: stat: stat /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/insufficient-storage-315487/events.json: no such file or directory

** /stderr **
helpers_test.go:175: Cleaning up "insufficient-storage-315487" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p insufficient-storage-315487
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p insufficient-storage-315487: (1.931099382s)
--- PASS: TestInsufficientStorage (13.75s)
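
With --output=json, each stdout line above is a CloudEvents-style record, so the RSRC_DOCKER_STORAGE failure can be extracted mechanically. A sketch assuming jq is available (the test itself does not use jq):

    minikube start -p insufficient-storage-315487 --output=json --driver=docker --container-runtime=crio \
        | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data.name + ": " + .data.message'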

TestRunningBinaryUpgrade (64.82s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.4042948994 start -p running-upgrade-601660 --memory=2200 --vm-driver=docker  --container-runtime=crio
E1213 20:05:42.313755  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.4042948994 start -p running-upgrade-601660 --memory=2200 --vm-driver=docker  --container-runtime=crio: (36.445571908s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-arm64 start -p running-upgrade-601660 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1213 20:06:25.577661  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-arm64 start -p running-upgrade-601660 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (24.521703728s)
helpers_test.go:175: Cleaning up "running-upgrade-601660" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p running-upgrade-601660
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p running-upgrade-601660: (2.975736604s)
--- PASS: TestRunningBinaryUpgrade (64.82s)

TestKubernetesUpgrade (391.1s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-440345 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-440345 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m14.06919172s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-arm64 stop -p kubernetes-upgrade-440345
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-arm64 stop -p kubernetes-upgrade-440345: (2.649642315s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-arm64 -p kubernetes-upgrade-440345 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-arm64 -p kubernetes-upgrade-440345 status --format={{.Host}}: exit status 7 (116.84612ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-440345 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-440345 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (4m36.790330731s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-440345 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-440345 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p kubernetes-upgrade-440345 --memory=2200 --kubernetes-version=v1.20.0 --driver=docker  --container-runtime=crio: exit status 106 (93.321577ms)

-- stdout --
	* [kubernetes-upgrade-440345] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20090
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20090-596807/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-596807/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-440345
	    minikube start -p kubernetes-upgrade-440345 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-4403452 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.2, by running:
	    
	    minikube start -p kubernetes-upgrade-440345 --kubernetes-version=v1.31.2
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-arm64 start -p kubernetes-upgrade-440345 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-arm64 start -p kubernetes-upgrade-440345 --memory=2200 --kubernetes-version=v1.31.2 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (34.753509477s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-440345" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubernetes-upgrade-440345
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p kubernetes-upgrade-440345: (2.53590401s)
--- PASS: TestKubernetesUpgrade (391.10s)
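
The shape of the test above: an in-place upgrade from v1.20.0 to v1.31.2 succeeds, while the attempted downgrade is refused with exit status 106 and the remediation printed in the stderr box. Condensed, with flags as in the log:

    minikube start -p kubernetes-upgrade-440345 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio
    minikube stop -p kubernetes-upgrade-440345
    minikube start -p kubernetes-upgrade-440345 --kubernetes-version=v1.31.2 --driver=docker --container-runtime=crio   # upgrade: allowed
    minikube start -p kubernetes-upgrade-440345 --kubernetes-version=v1.20.0 --driver=docker --container-runtime=crio   # downgrade: refused, exit 106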

TestMissingContainerUpgrade (185.94s)

=== RUN   TestMissingContainerUpgrade
=== PAUSE TestMissingContainerUpgrade
=== CONT  TestMissingContainerUpgrade
version_upgrade_test.go:309: (dbg) Run:  /tmp/minikube-v1.26.0.1177250673 start -p missing-upgrade-888035 --memory=2200 --driver=docker  --container-runtime=crio
E1213 20:00:42.299547  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:309: (dbg) Done: /tmp/minikube-v1.26.0.1177250673 start -p missing-upgrade-888035 --memory=2200 --driver=docker  --container-runtime=crio: (1m29.879416457s)
version_upgrade_test.go:318: (dbg) Run:  docker stop missing-upgrade-888035
version_upgrade_test.go:318: (dbg) Done: docker stop missing-upgrade-888035: (10.440935119s)
version_upgrade_test.go:323: (dbg) Run:  docker rm missing-upgrade-888035
version_upgrade_test.go:329: (dbg) Run:  out/minikube-linux-arm64 start -p missing-upgrade-888035 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
version_upgrade_test.go:329: (dbg) Done: out/minikube-linux-arm64 start -p missing-upgrade-888035 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m22.060297096s)
helpers_test.go:175: Cleaning up "missing-upgrade-888035" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p missing-upgrade-888035
helpers_test.go:178: (dbg) Done: out/minikube-linux-arm64 delete -p missing-upgrade-888035: (2.043821617s)
--- PASS: TestMissingContainerUpgrade (185.94s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.1s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-092288 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p NoKubernetes-092288 --no-kubernetes --kubernetes-version=1.20 --driver=docker  --container-runtime=crio: exit status 14 (95.135243ms)

-- stdout --
	* [NoKubernetes-092288] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20090
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20090-596807/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-596807/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.10s)
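
The exit-14 case above covers a flag conflict: --no-kubernetes cannot be combined with an explicit --kubernetes-version. The remediation printed in stderr, as a runnable pair:

    minikube config unset kubernetes-version   # clear any globally pinned version
    minikube start -p NoKubernetes-092288 --no-kubernetes --driver=docker --container-runtime=crio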

TestNoKubernetes/serial/StartWithK8s (38.94s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-092288 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-092288 --driver=docker  --container-runtime=crio: (38.393075353s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-092288 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (38.94s)

TestNoKubernetes/serial/StartWithStopK8s (9.39s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-092288 --no-kubernetes --driver=docker  --container-runtime=crio
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-092288 --no-kubernetes --driver=docker  --container-runtime=crio: (7.06457s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-arm64 -p NoKubernetes-092288 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-arm64 -p NoKubernetes-092288 status -o json: exit status 2 (306.839482ms)

-- stdout --
	{"Name":"NoKubernetes-092288","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-arm64 delete -p NoKubernetes-092288
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-arm64 delete -p NoKubernetes-092288: (2.015409664s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (9.39s)

TestNoKubernetes/serial/Start (9.76s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-092288 --no-kubernetes --driver=docker  --container-runtime=crio
E1213 20:01:25.577344  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-092288 --no-kubernetes --driver=docker  --container-runtime=crio: (9.757054444s)
--- PASS: TestNoKubernetes/serial/Start (9.76s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-092288 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-092288 "sudo systemctl is-active --quiet service kubelet": exit status 1 (278.162144ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.28s)
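
The exit-1 result above is the assertion succeeding, not a failure: systemctl is-active exits 3 for an inactive unit (the "Process exited with status 3" line in stderr), and minikube ssh surfaces that as a non-zero exit. The same check used affirmatively:

    minikube ssh -p NoKubernetes-092288 "sudo systemctl is-active --quiet service kubelet" \
        || echo "kubelet is not running (expected with --no-kubernetes)"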

TestNoKubernetes/serial/ProfileList (1.08s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-arm64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-arm64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (1.08s)

TestNoKubernetes/serial/Stop (1.28s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-arm64 stop -p NoKubernetes-092288
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-arm64 stop -p NoKubernetes-092288: (1.282473168s)
--- PASS: TestNoKubernetes/serial/Stop (1.28s)

TestNoKubernetes/serial/StartNoArgs (8.42s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-arm64 start -p NoKubernetes-092288 --driver=docker  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-arm64 start -p NoKubernetes-092288 --driver=docker  --container-runtime=crio: (8.416332678s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (8.42s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-arm64 ssh -p NoKubernetes-092288 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-arm64 ssh -p NoKubernetes-092288 "sudo systemctl is-active --quiet service kubelet": exit status 1 (363.665611ms)

** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.36s)

TestStoppedBinaryUpgrade/Setup (1.66s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.66s)

TestStoppedBinaryUpgrade/Upgrade (118.46s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2500688907 start -p stopped-upgrade-218579 --memory=2200 --vm-driver=docker  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2500688907 start -p stopped-upgrade-218579 --memory=2200 --vm-driver=docker  --container-runtime=crio: (37.30611895s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2500688907 -p stopped-upgrade-218579 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2500688907 -p stopped-upgrade-218579 stop: (2.628734856s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-arm64 start -p stopped-upgrade-218579 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
E1213 20:04:28.646626  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-arm64 start -p stopped-upgrade-218579 --memory=2200 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (1m18.518299712s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (118.46s)

TestStoppedBinaryUpgrade/MinikubeLogs (1.39s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-arm64 logs -p stopped-upgrade-218579
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-arm64 logs -p stopped-upgrade-218579: (1.394205623s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.39s)

TestPause/serial/Start (56.83s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-arm64 start -p pause-086016 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-arm64 start -p pause-086016 --memory=2048 --install-addons=false --wait=all --driver=docker  --container-runtime=crio: (56.83473341s)
--- PASS: TestPause/serial/Start (56.83s)

TestPause/serial/SecondStartNoReconfiguration (29.4s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-arm64 start -p pause-086016 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-arm64 start -p pause-086016 --alsologtostderr -v=1 --driver=docker  --container-runtime=crio: (29.381997098s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (29.40s)

TestPause/serial/Pause (1.06s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-086016 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-086016 --alsologtostderr -v=5: (1.056515492s)
--- PASS: TestPause/serial/Pause (1.06s)

TestPause/serial/VerifyStatus (0.48s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-arm64 status -p pause-086016 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-arm64 status -p pause-086016 --output=json --layout=cluster: exit status 2 (479.657714ms)

-- stdout --
	{"Name":"pause-086016","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 7 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.34.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-086016","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.48s)
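
In the cluster-layout JSON above, pausing maps to StatusCode 418 ("Paused") and the status command itself exits 2, so the non-zero exit is the expected signal rather than a failure. A minimal sketch of pulling one field out of that JSON by hand (assumes jq is installed; not part of the test):

	out=$(out/minikube-linux-arm64 status -p pause-086016 --output=json --layout=cluster) || true  # exits 2 while paused
	echo "$out" | jq -r '.Nodes[0].Components.kubelet.StatusName'  # prints "Stopped", matching the JSON above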

TestPause/serial/Unpause (1.04s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-arm64 unpause -p pause-086016 --alsologtostderr -v=5
pause_test.go:121: (dbg) Done: out/minikube-linux-arm64 unpause -p pause-086016 --alsologtostderr -v=5: (1.035592666s)
--- PASS: TestPause/serial/Unpause (1.04s)

TestPause/serial/PauseAgain (1.31s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-arm64 pause -p pause-086016 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-arm64 pause -p pause-086016 --alsologtostderr -v=5: (1.30865031s)
--- PASS: TestPause/serial/PauseAgain (1.31s)

TestPause/serial/DeletePaused (4.91s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-arm64 delete -p pause-086016 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-arm64 delete -p pause-086016 --alsologtostderr -v=5: (4.910356317s)
--- PASS: TestPause/serial/DeletePaused (4.91s)

TestPause/serial/VerifyDeletedResources (0.62s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-arm64 profile list --output json
pause_test.go:168: (dbg) Run:  docker ps -a
pause_test.go:173: (dbg) Run:  docker volume inspect pause-086016
pause_test.go:173: (dbg) Non-zero exit: docker volume inspect pause-086016: exit status 1 (35.926675ms)
-- stdout --
	[]
-- /stdout --
** stderr ** 
	Error response from daemon: get pause-086016: no such volume
** /stderr **
pause_test.go:178: (dbg) Run:  docker network ls
--- PASS: TestPause/serial/VerifyDeletedResources (0.62s)
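
The checks above assert that "minikube delete" left no Docker artifacts behind for the profile: no container, no volume, no network. A minimal sketch of the same probes run by hand (profile name taken from the log; expected results as shown above):

	docker ps -a --filter name=pause-086016 --format '{{.Names}}'       # expect no output
	docker volume inspect pause-086016                                  # expect exit 1: "no such volume"
	docker network ls --filter name=pause-086016 --format '{{.Name}}'   # expect no output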

TestNetworkPlugins/group/false (4.91s)

=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-arm64 start -p false-771336 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-arm64 start -p false-771336 --memory=2048 --alsologtostderr --cni=false --driver=docker  --container-runtime=crio: exit status 14 (282.533979ms)
-- stdout --
	* [false-771336] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	  - MINIKUBE_LOCATION=20090
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/20090-596807/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-596807/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-arm64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the docker driver based on user configuration
	
	
-- /stdout --
** stderr ** 
	I1213 20:08:24.577792  782546 out.go:345] Setting OutFile to fd 1 ...
	I1213 20:08:24.578006  782546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 20:08:24.578030  782546 out.go:358] Setting ErrFile to fd 2...
	I1213 20:08:24.578060  782546 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I1213 20:08:24.578428  782546 root.go:338] Updating PATH: /home/jenkins/minikube-integration/20090-596807/.minikube/bin
	I1213 20:08:24.579082  782546 out.go:352] Setting JSON to false
	I1213 20:08:24.580311  782546 start.go:129] hostinfo: {"hostname":"ip-172-31-31-251","uptime":13820,"bootTime":1734106684,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1072-aws","kernelArch":"aarch64","virtualizationSystem":"","virtualizationRole":"","hostId":"982e3628-3742-4b3e-bb63-ac1b07660ec7"}
	I1213 20:08:24.580403  782546 start.go:139] virtualization:  
	I1213 20:08:24.583593  782546 out.go:177] * [false-771336] minikube v1.34.0 on Ubuntu 20.04 (arm64)
	I1213 20:08:24.586707  782546 out.go:177]   - MINIKUBE_LOCATION=20090
	I1213 20:08:24.586772  782546 notify.go:220] Checking for updates...
	I1213 20:08:24.591183  782546 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I1213 20:08:24.593045  782546 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/20090-596807/kubeconfig
	I1213 20:08:24.594695  782546 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/20090-596807/.minikube
	I1213 20:08:24.596669  782546 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-arm64
	I1213 20:08:24.598555  782546 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I1213 20:08:24.601319  782546 config.go:182] Loaded profile config "force-systemd-flag-097888": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
	I1213 20:08:24.601470  782546 driver.go:394] Setting default libvirt URI to qemu:///system
	I1213 20:08:24.656105  782546 docker.go:123] docker version: linux-27.4.0:Docker Engine - Community
	I1213 20:08:24.656234  782546 cli_runner.go:164] Run: docker system info --format "{{json .}}"
	I1213 20:08:24.759522  782546 info.go:266] docker info: {ID:EOU5:DNGX:XN6V:L2FZ:UXRM:5TWK:EVUR:KC2F:GT7Z:Y4O4:GB77:5PD3 Containers:1 ContainersRunning:1 ContainersPaused:0 ContainersStopped:0 Images:4 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Using metacopy false] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:false KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:false BridgeNfIP6Tables:false Debug:false NFd:36 OomKillDisable:true NGoroutines:52 SystemTime:2024-12-13 20:08:24.747244237 +0000 UTC LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.15.0-1072-aws OperatingSystem:Ubuntu 20.04.6 LTS OSType:linux Architecture:aarch64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:8214831104 GenericResources:<nil> DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:ip-172-31-31-251 Labels:[] ExperimentalBuild:false ServerVersion:27.4.0 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:88bf19b2105c8b17560993bee28a01ddc2f97182 Expected:88bf19b2105c8b17560993bee28a01ddc2f97182} RuncCommit:{ID:v1.2.2-0-g7cb3632 Expected:v1.2.2-0-g7cb3632} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=builtin] ProductLicense: Warnings:[WARNING: bridge-nf-call-iptables is disabled WARNING: bridge-nf-call-ip6tables is disabled] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.19.2] map[Name:compose Path:/usr/libexec/docker/cli-plugins/docker-compose SchemaVersion:0.1.0 ShortDescription:Docker Compose Vendor:Docker Inc. Version:v2.31.0]] Warnings:<nil>}}
	I1213 20:08:24.759667  782546 docker.go:318] overlay module found
	I1213 20:08:24.762191  782546 out.go:177] * Using the docker driver based on user configuration
	I1213 20:08:24.764985  782546 start.go:297] selected driver: docker
	I1213 20:08:24.765032  782546 start.go:901] validating driver "docker" against <nil>
	I1213 20:08:24.765054  782546 start.go:912] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I1213 20:08:24.768006  782546 out.go:201] 
	W1213 20:08:24.770315  782546 out.go:270] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I1213 20:08:24.772432  782546 out.go:201] 
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-771336 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-771336

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-771336

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-771336

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-771336

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-771336

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-771336

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-771336

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-771336

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-771336

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-771336

>>> host: /etc/nsswitch.conf:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> host: /etc/hosts:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> host: /etc/resolv.conf:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-771336

>>> host: crictl pods:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> host: crictl containers:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> k8s: describe netcat deployment:
error: context "false-771336" does not exist

>>> k8s: describe netcat pod(s):
error: context "false-771336" does not exist

>>> k8s: netcat logs:
error: context "false-771336" does not exist

>>> k8s: describe coredns deployment:
error: context "false-771336" does not exist

>>> k8s: describe coredns pods:
error: context "false-771336" does not exist

>>> k8s: coredns logs:
error: context "false-771336" does not exist

>>> k8s: describe api server pod(s):
error: context "false-771336" does not exist

>>> k8s: api server logs:
error: context "false-771336" does not exist

>>> host: /etc/cni:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> host: ip a s:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> host: ip r s:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> host: iptables-save:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> host: iptables table nat:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> k8s: describe kube-proxy daemon set:
error: context "false-771336" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "false-771336" does not exist

>>> k8s: kube-proxy logs:
error: context "false-771336" does not exist

>>> host: kubelet daemon status:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> host: kubelet daemon config:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> k8s: kubelet logs:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: false-771336

>>> host: docker daemon status:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> host: docker daemon config:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> host: /etc/docker/daemon.json:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> host: docker system info:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> host: cri-docker daemon status:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> host: cri-docker daemon config:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> host: cri-dockerd version:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> host: containerd daemon status:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> host: containerd daemon config:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> host: /lib/systemd/system/containerd.service:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> host: /etc/containerd/config.toml:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> host: containerd config dump:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> host: crio daemon status:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> host: crio daemon config:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> host: /etc/crio:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"

>>> host: crio config:
* Profile "false-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-771336"
----------------------- debugLogs end: false-771336 [took: 4.433381787s] --------------------------------
helpers_test.go:175: Cleaning up "false-771336" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p false-771336
--- PASS: TestNetworkPlugins/group/false (4.91s)
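
This test passes because the failure is the expected outcome: with the crio container runtime, --cni=false is rejected during flag validation with MK_USAGE (exit status 14) before any node is created, as the stderr above shows. A minimal sketch reproducing the check by hand (profile name "cni-check" is hypothetical):

	out/minikube-linux-arm64 start -p cni-check --memory=2048 --cni=false --driver=docker --container-runtime=crio
	echo $?  # expect 14, after: X Exiting due to MK_USAGE: The "crio" container runtime requires CNI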

TestStartStop/group/old-k8s-version/serial/FirstStart (191.39s)

=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-994460 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
E1213 20:10:42.299336  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:11:25.576953  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-994460 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (3m11.388779449s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (191.39s)

TestStartStop/group/embed-certs/serial/FirstStart (52.54s)

=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-601639 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-601639 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (52.538838609s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (52.54s)

TestStartStop/group/old-k8s-version/serial/DeployApp (11.76s)

=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-994460 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [6d09aee0-58ce-4bab-b8ac-d7698b3080ca] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [6d09aee0-58ce-4bab-b8ac-d7698b3080ca] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 11.003863347s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context old-k8s-version-994460 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (11.76s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.64s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-994460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p old-k8s-version-994460 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.493007057s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context old-k8s-version-994460 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (1.64s)

TestStartStop/group/old-k8s-version/serial/Stop (12.21s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p old-k8s-version-994460 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p old-k8s-version-994460 --alsologtostderr -v=3: (12.214308664s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (12.21s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-994460 -n old-k8s-version-994460
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-994460 -n old-k8s-version-994460: exit status 7 (105.186681ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p old-k8s-version-994460 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)
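
Exit status 7 from the status probe is tolerated here ("may be ok"): the profile was stopped in the previous step, so a non-running host is the expected state, and enabling the dashboard addon only needs the stopped profile's config. A minimal sketch of the same probe (command and expected output taken from the log):

	out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-994460 -n old-k8s-version-994460
	echo $?  # expect 7, with "Stopped" on stdout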

TestStartStop/group/old-k8s-version/serial/SecondStart (140.13s)

=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p old-k8s-version-994460 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p old-k8s-version-994460 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.20.0: (2m19.742340683s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p old-k8s-version-994460 -n old-k8s-version-994460
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (140.13s)

TestStartStop/group/embed-certs/serial/DeployApp (10.38s)

=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-601639 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fb497188-02ad-4a79-8623-a2b924e9c984] Pending
helpers_test.go:344: "busybox" [fb497188-02ad-4a79-8623-a2b924e9c984] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fb497188-02ad-4a79-8623-a2b924e9c984] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 10.004120347s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context embed-certs-601639 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (10.38s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.35s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-601639 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p embed-certs-601639 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.233596301s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context embed-certs-601639 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.35s)

TestStartStop/group/embed-certs/serial/Stop (12.15s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p embed-certs-601639 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p embed-certs-601639 --alsologtostderr -v=3: (12.153381038s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (12.15s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.3s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-601639 -n embed-certs-601639
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-601639 -n embed-certs-601639: exit status 7 (109.897128ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p embed-certs-601639 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.30s)

TestStartStop/group/embed-certs/serial/SecondStart (306.86s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p embed-certs-601639 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1213 20:15:42.298802  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p embed-certs-601639 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (5m6.472350889s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p embed-certs-601639 -n embed-certs-601639
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (306.86s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.04s)

=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-5jqs8" [f4f18473-6798-4a89-8ee4-0d39fe5d0f8e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.033761156s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.04s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.1s)

=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-5jqs8" [f4f18473-6798-4a89-8ee4-0d39fe5d0f8e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004694877s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context old-k8s-version-994460 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (6.10s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p old-k8s-version-994460 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20240202-8f1494ea
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.27s)

TestStartStop/group/old-k8s-version/serial/Pause (3.11s)

=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p old-k8s-version-994460 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-994460 -n old-k8s-version-994460
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-994460 -n old-k8s-version-994460: exit status 2 (358.357321ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-994460 -n old-k8s-version-994460
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-994460 -n old-k8s-version-994460: exit status 2 (370.581348ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p old-k8s-version-994460 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-994460 -n old-k8s-version-994460
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p old-k8s-version-994460 -n old-k8s-version-994460
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (3.11s)
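
The pause check drives a pause / verify / unpause / verify cycle: while paused, the apiserver reports "Paused" and the kubelet "Stopped", each via exit status 2 (treated as "may be ok"), and after unpause both probes succeed again. A minimal sketch of the cycle (expected outputs inferred from the log above):

	out/minikube-linux-arm64 pause -p old-k8s-version-994460
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-994460 -n old-k8s-version-994460  # "Paused", exit 2
	out/minikube-linux-arm64 unpause -p old-k8s-version-994460
	out/minikube-linux-arm64 status --format={{.APIServer}} -p old-k8s-version-994460 -n old-k8s-version-994460  # expect "Running", exit 0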

TestStartStop/group/no-preload/serial/FirstStart (66.67s)

=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-403750 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1213 20:16:25.577644  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-403750 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (1m6.667805294s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (66.67s)

TestStartStop/group/no-preload/serial/DeployApp (10.34s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-403750 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [3bf01253-5bd4-470a-9ab9-ed47d2e35b45] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [3bf01253-5bd4-470a-9ab9-ed47d2e35b45] Running
start_stop_delete_test.go:194: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 10.00424727s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context no-preload-403750 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (10.34s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p no-preload-403750 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p no-preload-403750 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.053631651s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context no-preload-403750 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.19s)

TestStartStop/group/no-preload/serial/Stop (11.96s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p no-preload-403750 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p no-preload-403750 --alsologtostderr -v=3: (11.957694779s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (11.96s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.37s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-403750 -n no-preload-403750
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-403750 -n no-preload-403750: exit status 7 (210.469656ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p no-preload-403750 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.37s)

TestStartStop/group/no-preload/serial/SecondStart (300.4s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p no-preload-403750 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1213 20:18:10.573564  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/old-k8s-version-994460/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:10.579940  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/old-k8s-version-994460/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:10.591367  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/old-k8s-version-994460/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:10.612712  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/old-k8s-version-994460/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:10.654045  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/old-k8s-version-994460/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:10.735446  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/old-k8s-version-994460/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:10.897628  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/old-k8s-version-994460/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:11.219480  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/old-k8s-version-994460/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:11.861273  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/old-k8s-version-994460/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:13.143537  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/old-k8s-version-994460/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:15.705478  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/old-k8s-version-994460/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:20.827584  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/old-k8s-version-994460/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:31.069777  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/old-k8s-version-994460/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:18:51.551187  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/old-k8s-version-994460/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p no-preload-403750 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m59.930979032s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p no-preload-403750 -n no-preload-403750
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (300.40s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-svwz5" [19c07dc9-3152-48db-a430-8470eac71e83] Running
E1213 20:19:32.513204  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/old-k8s-version-994460/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:272: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004449771s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-svwz5" [19c07dc9-3152-48db-a430-8470eac71e83] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004267816s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context embed-certs-601639 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p embed-certs-601639 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.26s)

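For reference, the image verification above can be rerun by hand against any profile. A minimal sketch using this run's profile name (piping through jq is an assumed convenience, not part of the test):

    # List images present in the profile's container runtime as JSON.
    out/minikube-linux-arm64 -p embed-certs-601639 image list --format=json
    # Optional: pretty-print (assumes jq is installed on the host).
    out/minikube-linux-arm64 -p embed-certs-601639 image list --format=json | jq .
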
TestStartStop/group/embed-certs/serial/Pause (3.39s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p embed-certs-601639 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-601639 -n embed-certs-601639
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-601639 -n embed-certs-601639: exit status 2 (437.799689ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-601639 -n embed-certs-601639
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-601639 -n embed-certs-601639: exit status 2 (337.4878ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p embed-certs-601639 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-601639 -n embed-certs-601639
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p embed-certs-601639 -n embed-certs-601639
--- PASS: TestStartStop/group/embed-certs/serial/Pause (3.39s)

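The pause cycle above can be reproduced manually; a sketch using this run's profile name. While components are paused, status intentionally exits with code 2, which the test tolerates:

    # Pause the control plane and kubelet.
    out/minikube-linux-arm64 pause -p embed-certs-601639 --alsologtostderr -v=1
    # Status now reports Paused/Stopped and exits 2.
    out/minikube-linux-arm64 status --format={{.APIServer}} -p embed-certs-601639 -n embed-certs-601639
    # Resume everything.
    out/minikube-linux-arm64 unpause -p embed-certs-601639 --alsologtostderr -v=1
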
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.92s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-667864 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-667864 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (52.923875117s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (52.92s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (14.37s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-667864 create -f testdata/busybox.yaml
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [7ab824aa-376a-469f-9d6a-07f135f306b6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
E1213 20:20:42.301746  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "busybox" [7ab824aa-376a-469f-9d6a-07f135f306b6] Running
E1213 20:20:54.435490  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/old-k8s-version-994460/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:194: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 14.003768821s
start_stop_delete_test.go:194: (dbg) Run:  kubectl --context default-k8s-diff-port-667864 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (14.37s)

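The deploy step amounts to creating the busybox pod from the suite's testdata and probing it with exec; a sketch using this run's context (the explicit kubectl wait is an assumed stand-in for the test's polling helper):

    kubectl --context default-k8s-diff-port-667864 create -f testdata/busybox.yaml
    # Block until the pod is Ready (the test polls for up to 8m).
    kubectl --context default-k8s-diff-port-667864 wait --for=condition=ready pod busybox --timeout=8m
    # Read the open-file limit inside the container.
    kubectl --context default-k8s-diff-port-667864 exec busybox -- /bin/sh -c "ulimit -n"
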
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-667864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p default-k8s-diff-port-667864 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.10925125s)
start_stop_delete_test.go:213: (dbg) Run:  kubectl --context default-k8s-diff-port-667864 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.22s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p default-k8s-diff-port-667864 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p default-k8s-diff-port-667864 --alsologtostderr -v=3: (11.971347663s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (11.97s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-667864 -n default-k8s-diff-port-667864
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-667864 -n default-k8s-diff-port-667864: exit status 7 (89.927091ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-667864 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.22s)

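Addons can be enabled while a profile is stopped, as exercised above; a sketch with this run's profile name (exit status 7 from status simply reflects the Stopped host):

    # Confirm the host is stopped; this exits 7.
    out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-667864 -n default-k8s-diff-port-667864
    # Enabling the dashboard addon still succeeds against the stopped profile.
    out/minikube-linux-arm64 addons enable dashboard -p default-k8s-diff-port-667864 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
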
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (282.15s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p default-k8s-diff-port-667864 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1213 20:21:08.647863  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:21:25.577626  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p default-k8s-diff-port-667864 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (4m41.733392933s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p default-k8s-diff-port-667864 -n default-k8s-diff-port-667864
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (282.15s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7fp4n" [e769fbf3-870a-486b-b3e7-de533dda8209] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004560532s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.1s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-7fp4n" [e769fbf3-870a-486b-b3e7-de533dda8209] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003770141s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context no-preload-403750 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.10s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p no-preload-403750 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/no-preload/serial/Pause (3.71s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p no-preload-403750 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-403750 -n no-preload-403750
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-403750 -n no-preload-403750: exit status 2 (363.613193ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-403750 -n no-preload-403750
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-403750 -n no-preload-403750: exit status 2 (336.638836ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p no-preload-403750 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p no-preload-403750 -n no-preload-403750
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p no-preload-403750 -n no-preload-403750
--- PASS: TestStartStop/group/no-preload/serial/Pause (3.71s)

TestStartStop/group/newest-cni/serial/FirstStart (37.9s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:184: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-241962 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
E1213 20:23:10.572861  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/old-k8s-version-994460/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:23:38.276896  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/old-k8s-version-994460/client.crt: no such file or directory" logger="UnhandledError"
start_stop_delete_test.go:184: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-241962 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (37.90086806s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (37.90s)

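The start above shows how a custom pod CIDR is threaded through to kubeadm in CNI mode; the same invocation, reformatted for readability. The relaxed --wait set is used because pods cannot schedule until a CNI is actually configured:

    out/minikube-linux-arm64 start -p newest-cni-241962 \
      --memory=2200 --alsologtostderr \
      --wait=apiserver,system_pods,default_sa \
      --network-plugin=cni \
      --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 \
      --driver=docker --container-runtime=crio \
      --kubernetes-version=v1.31.2
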
TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.27s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:203: (dbg) Run:  out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-241962 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:203: (dbg) Done: out/minikube-linux-arm64 addons enable metrics-server -p newest-cni-241962 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.27348658s)
start_stop_delete_test.go:209: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.27s)

TestStartStop/group/newest-cni/serial/Stop (1.26s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:226: (dbg) Run:  out/minikube-linux-arm64 stop -p newest-cni-241962 --alsologtostderr -v=3
start_stop_delete_test.go:226: (dbg) Done: out/minikube-linux-arm64 stop -p newest-cni-241962 --alsologtostderr -v=3: (1.25820931s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (1.26s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:237: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-241962 -n newest-cni-241962
start_stop_delete_test.go:237: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-241962 -n newest-cni-241962: exit status 7 (83.695597ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:237: status error: exit status 7 (may be ok)
start_stop_delete_test.go:244: (dbg) Run:  out/minikube-linux-arm64 addons enable dashboard -p newest-cni-241962 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.22s)

TestStartStop/group/newest-cni/serial/SecondStart (17.58s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:254: (dbg) Run:  out/minikube-linux-arm64 start -p newest-cni-241962 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2
start_stop_delete_test.go:254: (dbg) Done: out/minikube-linux-arm64 start -p newest-cni-241962 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=docker  --container-runtime=crio --kubernetes-version=v1.31.2: (17.071959259s)
start_stop_delete_test.go:260: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Host}} -p newest-cni-241962 -n newest-cni-241962
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (17.58s)

TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:271: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:282: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p newest-cni-241962 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.32s)

TestStartStop/group/newest-cni/serial/Pause (3.33s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p newest-cni-241962 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-241962 -n newest-cni-241962
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-241962 -n newest-cni-241962: exit status 2 (355.123204ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-241962 -n newest-cni-241962
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-241962 -n newest-cni-241962: exit status 2 (357.134926ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p newest-cni-241962 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p newest-cni-241962 -n newest-cni-241962
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p newest-cni-241962 -n newest-cni-241962
--- PASS: TestStartStop/group/newest-cni/serial/Pause (3.33s)

TestNetworkPlugins/group/auto/Start (48.37s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p auto-771336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p auto-771336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=docker  --container-runtime=crio: (48.371108618s)
--- PASS: TestNetworkPlugins/group/auto/Start (48.37s)

TestNetworkPlugins/group/auto/KubeletFlags (0.3s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p auto-771336 "pgrep -a kubelet"
I1213 20:24:56.898403  602199 config.go:182] Loaded profile config "auto-771336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.30s)

TestNetworkPlugins/group/auto/NetCatPod (10.31s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-771336 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-9nfdn" [aad035bb-2615-41b6-94cb-1d122d083923] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-9nfdn" [aad035bb-2615-41b6-94cb-1d122d083923] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 10.004143171s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (10.31s)

TestNetworkPlugins/group/auto/DNS (0.2s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-771336 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.20s)

TestNetworkPlugins/group/auto/Localhost (0.15s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-771336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.15s)

TestNetworkPlugins/group/auto/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-771336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.16s)

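Every NetworkPlugins group runs the same three connectivity probes against its netcat deployment, shown above for the auto profile and repeated for each CNI below; a sketch using the auto context:

    # DNS: resolve the in-cluster service name through the plugin's network.
    kubectl --context auto-771336 exec deployment/netcat -- nslookup kubernetes.default
    # Localhost: the pod can reach its own port directly.
    kubectl --context auto-771336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
    # HairPin: the pod can reach itself back through its service.
    kubectl --context auto-771336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
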
TestNetworkPlugins/group/kindnet/Start (51.94s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p kindnet-771336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio
E1213 20:25:42.298848  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p kindnet-771336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=docker  --container-runtime=crio: (51.944641711s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (51.94s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-fgwhb" [d423ad45-ef27-4337-afb6-635a93756e8e] Running
start_stop_delete_test.go:272: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.003614648s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-695b96c756-fgwhb" [d423ad45-ef27-4337-afb6-635a93756e8e] Running
start_stop_delete_test.go:285: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004517972s
start_stop_delete_test.go:289: (dbg) Run:  kubectl --context default-k8s-diff-port-667864 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.11s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:302: (dbg) Run:  out/minikube-linux-arm64 -p default-k8s-diff-port-667864 image list --format=json
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241007-36f62932
start_stop_delete_test.go:302: Found non-minikube image: kindest/kindnetd:v20241108-5c6d2daf
start_stop_delete_test.go:302: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.31s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (4.97s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 pause -p default-k8s-diff-port-667864 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 pause -p default-k8s-diff-port-667864 --alsologtostderr -v=1: (1.215447763s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-667864 -n default-k8s-diff-port-667864
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-667864 -n default-k8s-diff-port-667864: exit status 2 (501.083775ms)
-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-667864 -n default-k8s-diff-port-667864
start_stop_delete_test.go:309: (dbg) Non-zero exit: out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-667864 -n default-k8s-diff-port-667864: exit status 2 (482.026792ms)
-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:309: status error: exit status 2 (may be ok)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 unpause -p default-k8s-diff-port-667864 --alsologtostderr -v=1
start_stop_delete_test.go:309: (dbg) Done: out/minikube-linux-arm64 unpause -p default-k8s-diff-port-667864 --alsologtostderr -v=1: (1.286456651s)
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.APIServer}} -p default-k8s-diff-port-667864 -n default-k8s-diff-port-667864
start_stop_delete_test.go:309: (dbg) Run:  out/minikube-linux-arm64 status --format={{.Kubelet}} -p default-k8s-diff-port-667864 -n default-k8s-diff-port-667864
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (4.97s)
E1213 20:30:38.163735  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/auto-771336/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:30:40.593002  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/default-k8s-diff-port-667864/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:30:40.599374  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/default-k8s-diff-port-667864/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:30:40.610719  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/default-k8s-diff-port-667864/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:30:40.632088  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/default-k8s-diff-port-667864/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:30:40.673720  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/default-k8s-diff-port-667864/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:30:40.755275  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/default-k8s-diff-port-667864/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:30:40.916843  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/default-k8s-diff-port-667864/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:30:41.239044  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/default-k8s-diff-port-667864/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:30:41.881315  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/default-k8s-diff-port-667864/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:30:42.299544  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/functional-355453/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:30:43.163555  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/default-k8s-diff-port-667864/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:30:45.725555  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/default-k8s-diff-port-667864/client.crt: no such file or directory" logger="UnhandledError"

TestNetworkPlugins/group/calico/Start (67.6s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p calico-771336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p calico-771336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=docker  --container-runtime=crio: (1m7.601722666s)
--- PASS: TestNetworkPlugins/group/calico/Start (67.60s)

TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-gnjgh" [4154c0c9-530e-4a25-be3b-4566b99cab5a] Running
E1213 20:26:25.577453  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/addons-248098/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.0066927s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

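The ControllerPod check waits for the CNI's DaemonSet pod to report healthy; a roughly equivalent manual check (a sketch, with the label selector taken from this run's log):

    # Block until the kindnet pod in kube-system is Ready.
    kubectl --context kindnet-771336 -n kube-system wait --for=condition=ready pod -l app=kindnet --timeout=10m
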
TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p kindnet-771336 "pgrep -a kubelet"
I1213 20:26:27.020897  602199 config.go:182] Loaded profile config "kindnet-771336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.37s)

TestNetworkPlugins/group/kindnet/NetCatPod (14.33s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-771336 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-h4fwm" [36b47683-b71f-482b-897a-b630313ae706] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-h4fwm" [36b47683-b71f-482b-897a-b630313ae706] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 14.005014778s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (14.33s)

TestNetworkPlugins/group/kindnet/DNS (0.25s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-771336 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.25s)

TestNetworkPlugins/group/kindnet/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-771336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.20s)

TestNetworkPlugins/group/kindnet/HairPin (0.21s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-771336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.21s)

TestNetworkPlugins/group/custom-flannel/Start (63.58s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p custom-flannel-771336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p custom-flannel-771336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=docker  --container-runtime=crio: (1m3.584300925s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (63.58s)

TestNetworkPlugins/group/calico/ControllerPod (6.02s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-zd58b" [73410c8c-81d8-447f-8f62-35195453d380] Running
E1213 20:27:21.716095  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/no-preload-403750/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:27:21.722837  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/no-preload-403750/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:27:21.734768  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/no-preload-403750/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:27:21.756588  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/no-preload-403750/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:27:21.798938  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/no-preload-403750/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:27:21.881099  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/no-preload-403750/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:27:22.043205  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/no-preload-403750/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:27:22.365267  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/no-preload-403750/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:27:23.006794  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/no-preload-403750/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.008358969s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.02s)

TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p calico-771336 "pgrep -a kubelet"
I1213 20:27:23.862339  602199 config.go:182] Loaded profile config "calico-771336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.37s)

TestNetworkPlugins/group/calico/NetCatPod (14.37s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-771336 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-gzkwg" [bb00e4fc-1466-4d24-8f42-da1973deb69c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1213 20:27:24.288885  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/no-preload-403750/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:27:26.851133  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/no-preload-403750/client.crt: no such file or directory" logger="UnhandledError"
E1213 20:27:31.973080  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/no-preload-403750/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-gzkwg" [bb00e4fc-1466-4d24-8f42-da1973deb69c] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 14.003865231s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (14.37s)

TestNetworkPlugins/group/calico/DNS (0.29s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-771336 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.29s)

TestNetworkPlugins/group/calico/Localhost (0.22s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-771336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.22s)

TestNetworkPlugins/group/calico/HairPin (0.23s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-771336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.23s)

TestNetworkPlugins/group/enable-default-cni/Start (79.9s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p enable-default-cni-771336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio
E1213 20:28:10.572294  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/old-k8s-version-994460/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p enable-default-cni-771336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=docker  --container-runtime=crio: (1m19.896963686s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (79.90s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p custom-flannel-771336 "pgrep -a kubelet"
I1213 20:28:12.017805  602199 config.go:182] Loaded profile config "custom-flannel-771336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.39s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (12.36s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-771336 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-2vslw" [d44d71bd-56d8-4a18-bcea-3029b063a8af] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-2vslw" [d44d71bd-56d8-4a18-bcea-3029b063a8af] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 12.004433897s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (12.36s)

TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-771336 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.26s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.2s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-771336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.20s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-771336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.18s)

TestNetworkPlugins/group/flannel/Start (55.47s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p flannel-771336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p flannel-771336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=docker  --container-runtime=crio: (55.474040493s)
--- PASS: TestNetworkPlugins/group/flannel/Start (55.47s)
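
Note: the Start step is a plain invocation of the minikube binary with the flags shown in the log line above. A minimal wrapper that drives the same command from Go (a sketch under the assumption you are running from the test workspace; the binary path, profile name, and flags are copied from the log, and error handling is reduced to a fatal print):

    package main

    import (
        "log"
        "os/exec"
    )

    func main() {
        // Same command the test runs: start a crio cluster with the flannel CNI.
        cmd := exec.Command("out/minikube-linux-arm64", "start",
            "-p", "flannel-771336",
            "--memory=3072", "--alsologtostderr",
            "--wait=true", "--wait-timeout=15m",
            "--cni=flannel", "--driver=docker", "--container-runtime=crio")
        out, err := cmd.CombinedOutput()
        if err != nil {
            log.Fatalf("minikube start failed: %v\n%s", err, out)
        }
        log.Printf("cluster up:\n%s", out)
    }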

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.48s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p enable-default-cni-771336 "pgrep -a kubelet"
I1213 20:29:24.469407  602199 config.go:182] Loaded profile config "enable-default-cni-771336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.48s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.51s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-771336 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-g9dvn" [a7b3c9aa-ea78-4c7f-818f-49172410137a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-g9dvn" [a7b3c9aa-ea78-4c7f-818f-49172410137a] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.005046538s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.51s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-771336 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.21s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-771336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-771336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-vsm6h" [e14bc8a7-ca2b-421d-8134-740772fc46d6] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005007315s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p flannel-771336 "pgrep -a kubelet"
I1213 20:29:52.976983  602199 config.go:182] Loaded profile config "flannel-771336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.39s)
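
Note: the KubeletFlags check only needs the kubelet command line, obtained by running `pgrep -a kubelet` on the node over `minikube ssh`, as the Run line above shows. A sketch of the same probe (illustrative; it assumes the binary path and profile from the log):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // `pgrep -a kubelet` prints "<pid> <full command line>", which is
        // what the test inspects for the expected kubelet flags.
        out, err := exec.Command("out/minikube-linux-arm64",
            "ssh", "-p", "flannel-771336", "pgrep -a kubelet").Output()
        if err != nil {
            log.Fatalf("minikube ssh failed: %v", err)
        }
        fmt.Printf("kubelet invocation: %s", out)
    }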

TestNetworkPlugins/group/flannel/NetCatPod (11.34s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-771336 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-hwb9j" [f3972f8c-f0f4-48d1-9fb0-1e847f0871c9] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6fc964789b-hwb9j" [f3972f8c-f0f4-48d1-9fb0-1e847f0871c9] Running
E1213 20:29:59.755130  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/auto-771336/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.004150794s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.34s)

TestNetworkPlugins/group/bridge/Start (49.51s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-arm64 start -p bridge-771336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio
E1213 20:30:02.317394  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/auto-771336/client.crt: no such file or directory" logger="UnhandledError"
net_test.go:112: (dbg) Done: out/minikube-linux-arm64 start -p bridge-771336 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=docker  --container-runtime=crio: (49.507634473s)
--- PASS: TestNetworkPlugins/group/bridge/Start (49.51s)

TestNetworkPlugins/group/flannel/DNS (0.21s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-771336 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.21s)
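
Note: the DNS check resolves kubernetes.default from inside the netcat pod, exercising the pod's resolv.conf path through the cluster DNS Service. A pure-Go analogue that queries the cluster DNS address directly (10.96.0.10, the same target the dig probes in the debug logs below use); that address is only reachable from inside the cluster network, so this is an illustration rather than a host-side tool:

    package main

    import (
        "context"
        "fmt"
        "net"
        "time"
    )

    func main() {
        // Force Go's resolver and point every DNS dial at the cluster DNS
        // Service instead of the host's configured nameservers.
        r := &net.Resolver{
            PreferGo: true,
            Dial: func(ctx context.Context, network, _ string) (net.Conn, error) {
                d := net.Dialer{Timeout: 5 * time.Second}
                return d.DialContext(ctx, network, "10.96.0.10:53")
            },
        }
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        addrs, err := r.LookupHost(ctx, "kubernetes.default.svc.cluster.local")
        fmt.Println(addrs, err)
    }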

TestNetworkPlugins/group/flannel/Localhost (0.23s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-771336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.23s)

TestNetworkPlugins/group/flannel/HairPin (0.2s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-771336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.20s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-arm64 ssh -p bridge-771336 "pgrep -a kubelet"
I1213 20:30:49.685958  602199 config.go:182] Loaded profile config "bridge-771336": Driver=docker, ContainerRuntime=crio, KubernetesVersion=v1.31.2
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.31s)

TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-771336 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6fc964789b-569ld" [525890ed-aa55-4f39-991a-890ce0668a03] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E1213 20:30:50.846890  602199 cert_rotation.go:171] "Unhandled Error" err="key failed with : open /home/jenkins/minikube-integration/20090-596807/.minikube/profiles/default-k8s-diff-port-667864/client.crt: no such file or directory" logger="UnhandledError"
helpers_test.go:344: "netcat-6fc964789b-569ld" [525890ed-aa55-4f39-991a-890ce0668a03] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.004122684s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.26s)

TestNetworkPlugins/group/bridge/DNS (0.5s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-771336 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.50s)

TestNetworkPlugins/group/bridge/Localhost (0.21s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-771336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.21s)

TestNetworkPlugins/group/bridge/HairPin (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-771336 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.16s)

Test skip (31/330)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.31.2/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.2/cached-images (0.00s)

TestDownloadOnly/v1.31.2/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.2/binaries (0.00s)

TestDownloadOnly/v1.31.2/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.31.2/kubectl (0.00s)

TestDownloadOnlyKic (0.57s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:232: (dbg) Run:  out/minikube-linux-arm64 start --download-only -p download-docker-972085 --alsologtostderr --driver=docker  --container-runtime=crio
aaa_download_only_test.go:244: Skip for arm64 platform. See https://github.com/kubernetes/minikube/issues/10144
helpers_test.go:175: Cleaning up "download-docker-972085" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p download-docker-972085
--- SKIP: TestDownloadOnlyKic (0.57s)

TestOffline (0s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:35: skipping TestOffline - only docker runtime supported on arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestOffline (0.00s)

TestAddons/serial/Volcano (0.34s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:789: skipping: crio not supported
addons_test.go:992: (dbg) Run:  out/minikube-linux-arm64 -p addons-248098 addons disable volcano --alsologtostderr -v=1
--- SKIP: TestAddons/serial/Volcano (0.34s)

TestAddons/serial/GCPAuth/RealCredentials (0s)

=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestAddons/parallel/AmdGpuDevicePlugin (0s)

=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

TestDockerFlags (0s)

=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio true linux arm64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:45: Skip if arm64. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestHyperKitDriverInstallOrUpdate (0s)

=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

TestHyperkitDriverSkipUpgrade (0s)

=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

TestFunctional/parallel/MySQL (0s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1787: arm64 is not supported by mysql. Skip the test. See https://github.com/kubernetes/minikube/issues/10144
--- SKIP: TestFunctional/parallel/MySQL (0.00s)

TestFunctional/parallel/DockerEnv (0s)

=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:463: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:550: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild (0s)

=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

TestChangeNoneUser (0s)

=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

TestStartStop/group/disable-driver-mounts (0.18s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:101: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-672584" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p disable-driver-mounts-672584
--- SKIP: TestStartStop/group/disable-driver-mounts (0.18s)

TestNetworkPlugins/group/kubenet (4.89s)

=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:629: 
----------------------- debugLogs start: kubenet-771336 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-771336

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-771336

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-771336

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-771336

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-771336

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-771336

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-771336

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-771336

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-771336

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-771336

>>> host: /etc/nsswitch.conf:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> host: /etc/hosts:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> host: /etc/resolv.conf:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-771336

>>> host: crictl pods:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> host: crictl containers:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> k8s: describe netcat deployment:
error: context "kubenet-771336" does not exist

>>> k8s: describe netcat pod(s):
error: context "kubenet-771336" does not exist

>>> k8s: netcat logs:
error: context "kubenet-771336" does not exist

>>> k8s: describe coredns deployment:
error: context "kubenet-771336" does not exist

>>> k8s: describe coredns pods:
error: context "kubenet-771336" does not exist

>>> k8s: coredns logs:
error: context "kubenet-771336" does not exist

>>> k8s: describe api server pod(s):
error: context "kubenet-771336" does not exist

>>> k8s: api server logs:
error: context "kubenet-771336" does not exist

>>> host: /etc/cni:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> host: ip a s:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> host: ip r s:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> host: iptables-save:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> host: iptables table nat:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-771336" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-771336" does not exist

>>> k8s: kube-proxy logs:
error: context "kubenet-771336" does not exist

>>> host: kubelet daemon status:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> host: kubelet daemon config:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> k8s: kubelet logs:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-771336

>>> host: docker daemon status:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> host: docker daemon config:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> host: /etc/docker/daemon.json:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> host: docker system info:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> host: cri-docker daemon status:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> host: cri-docker daemon config:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> host: cri-dockerd version:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> host: containerd daemon status:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> host: containerd daemon config:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> host: /etc/containerd/config.toml:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> host: containerd config dump:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> host: crio daemon status:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> host: crio daemon config:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> host: /etc/crio:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

>>> host: crio config:
* Profile "kubenet-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-771336"

----------------------- debugLogs end: kubenet-771336 [took: 4.643726242s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-771336" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p kubenet-771336
--- SKIP: TestNetworkPlugins/group/kubenet (4.89s)

TestNetworkPlugins/group/cilium (5.34s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:629: 
----------------------- debugLogs start: cilium-771336 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-771336

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-771336

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-771336

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-771336

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-771336

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-771336

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-771336

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-771336

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-771336

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-771336

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-771336

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-771336" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-771336" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-771336" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-771336" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-771336" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-771336" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-771336" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-771336" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

>>> host: iptables table nat:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-771336

>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-771336

>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-771336" does not exist

>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-771336" does not exist

>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-771336

>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-771336

>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-771336" does not exist

>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-771336" does not exist

>>> k8s: describe kube-proxy daemon set:
error: context "cilium-771336" does not exist

>>> k8s: describe kube-proxy pod(s):
error: context "cilium-771336" does not exist

>>> k8s: kube-proxy logs:
error: context "cilium-771336" does not exist

>>> host: kubelet daemon status:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

>>> host: kubelet daemon config:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

>>> k8s: kubelet logs:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-771336

>>> host: docker daemon status:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

>>> host: docker daemon config:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

>>> host: /etc/docker/daemon.json:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

>>> host: docker system info:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

>>> host: cri-docker daemon status:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

>>> host: cri-docker daemon config:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

>>> host: cri-dockerd version:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

>>> host: containerd daemon status:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

>>> host: containerd daemon config:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

>>> host: /etc/containerd/config.toml:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

>>> host: containerd config dump:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

>>> host: crio daemon status:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

>>> host: crio daemon config:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

>>> host: /etc/crio:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

>>> host: crio config:
* Profile "cilium-771336" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-771336"

----------------------- debugLogs end: cilium-771336 [took: 5.091020724s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-771336" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-arm64 delete -p cilium-771336
--- SKIP: TestNetworkPlugins/group/cilium (5.34s)
